• @[email protected]
    100
    5 months ago

    “CrowdStrike said it also plans to move to a staggered approach to releasing content updates so that not everyone receives the same update at once, and to give customers more fine-grained control over when the updates are installed.”

    Hol up. So they like still get to exist? Microsoft and affected industries just gonna kinda move past this?

    • @[email protected]
      39
      5 months ago

      Haven’t seen anything from the affected major players. Obviously CrowdStrike isn’t going to say they’re fucked long-term; they have to act like this is just a little hiccup and move on. Lawsuits are absolutely incoming.

    • @Ledivin
      29
      edit-2
      5 months ago

      We’ll see how fucked they are from SLA breaches/etc., and then we’ll see how many companies jump ship to an alternative. We won’t have the real fallout from this event for months or years.

    • @Modern_medicine_isnt
      19
      5 months ago

      Newsflash: SolarWinds still exists too. Not sure I could name a company that screwed up so big and actually paid the price.

      • @[email protected]
        14
        5 months ago

        Yeah, what was I thinking. United Airlines was bankrupt and literally beating people up on their planes, and it still got taxpayer payouts and is still around today, paying investors dividends.

      • @TheLimiter
        3
        5 months ago

        Two days ago my company sent out an all hands email that we’re going company wide with Crowdstrike.

        • @[email protected]
          2
          5 months ago

          Now’s the time to sign up. They’ll slash prices and hopefully never fuck up this bad again.

          Have we had an XaaS fuck up real, real bad twice yet?

    • @[email protected]
      4
      5 months ago

      I wasn’t affected, but I bet a lot of admins, as pissed as they were, were thinking “I could easily fuck up this bad or worse”.

      • @jeeva
        1
        edit-2
        5 months ago

        Yeah, what’s the jokey parable thing?

        A CTO is at lunch when a call comes in. There’s been a huge outage, caused by a low level employee pressing the wrong button.
        “Damn, you going to fire that guy?”
        “Hell no, do you know how much I just spent on training him to never do that again?”

        (</Blah>)

    • Kairos
      2
      5 months ago

      Companies using CrowdStrike and Windows aren’t really the type to be active about this sort of thing.

        • Kairos
          -2
          5 months ago

          The companies who use CrowdStrike (lazy fix) on Windows (garbage OS) aren’t really the type to want to switch away from it (will take effort)

          • @[email protected]
            4
            5 months ago

            I don’t understand the downvotes. You’re right on all points. If the task is too big, it can take years from testing another solution to using it for real.

  • ditty
    89
    5 months ago

    $5.4 Bn so far, not including lost worker productivity or damage to brand reputations, so that’s a very conservative estimate. And Cybersecurity insurance will supposedly only cover up to 20% of that (but good luck getting even that much). What a clusterf***

      • @11111one11111
        3
        5 months ago

        No, it’s effectively all of them, because all the companies outside the 500 combined wouldn’t have enough net worth to move the needle. So technically they may not be included, but they’d be covered by whatever amount was rounded up to make the even $5.4B.

        • @[email protected]
          3
          5 months ago

          All the CrowdStrike companies on earth minus the 500 biggest (American) ones? I have a hard time believing it’s as insignificant as you assume. I guess we’ll see…

          • flatlined
            2
            5 months ago

            It’s a variation on the old saw: “How much is the difference between a million and a billion? About a billion.” Once numbers get that big, it’s hard to grasp the relative sizes. That said, I’m also interested in a more comprehensive breakdown: who is impacted, by how much, and where.
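The old saw checks out numerically; a throwaway sketch (plain arithmetic, nothing assumed beyond the saying itself):

```python
# The gap between a million and a billion: subtracting the million
# barely dents the billion.
million = 1_000_000
billion = 1_000_000_000

diff = billion - million
print(diff)            # 999000000
print(diff / billion)  # 0.999 -- still, to three figures, a billion
```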

            • @11111one11111
              1
              5 months ago

              100% correct. I wasn’t implying that I knew the figures just that the size of the Fortune 500 is used as an economic index for this reason.

  • @[email protected]
    42
    edit-2
    5 months ago

    On Wednesday, CrowdStrike released a report outlining the initial results of its investigation into the incident, which involved a file that helps CrowdStrike’s security platform look for signs of malicious hacking on customer devices.

    The company routinely tests its software updates before pushing them out to customers, CrowdStrike said in the report. But on July 19, a bug in CrowdStrike’s cloud-based testing system — specifically, the part that runs validation checks on new updates prior to release — ended up allowing the software to be pushed out “despite containing problematic content data.”

    When Windows devices using CrowdStrike’s cybersecurity tools tried to access the flawed file, it caused an “out-of-bounds memory read” that “could not be gracefully handled, resulting in a Windows operating system crash,” CrowdStrike said.

    Couldn’t it, though? 🤔

    And CrowdStrike said it also plans to move to a staggered approach to releasing content updates so that not everyone receives the same update at once, and to give customers more fine-grained control over when the updates are installed.

    I thought they were already supposed to be doing this?

    • @whatwhatwhatwhat
      9
      5 months ago

      The fact that they weren’t already doing staggered releases is mind-boggling. I work for a company with a minuscule fraction of CrowdStrike’s user base / value, and even we do staggered releases.

      • @foggenbooty
        3
        5 months ago

        They do have staggered releases, but it’s a bit more complicated. The client that you run does have versioning, and you can choose to lag behind the current build, but this was a bad definition update. Most people want the latest definitions to protect themselves from zero-days. The whole thing is complicated and a bit wonky, but the real issue here is CrowdStrike’s kernel driver not validating the content of the definition before loading it.
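CrowdStrike hasn’t published the parser involved, so this is only a sketch of the failure class being described: an out-of-bounds read caused by trusting length/offset fields inside a binary content file. A hypothetical Python illustration (the file layout is made up) of validating a definition blob before dereferencing anything in it:

```python
import struct

def parse_definition(blob: bytes):
    """Hypothetical parser for a binary definition file laid out as:
    a 4-byte record count, then (4-byte offset, 4-byte length) pairs
    pointing into a payload region. Every field is validated before
    use, so malformed content is rejected with an error instead of
    triggering an out-of-bounds read."""
    if len(blob) < 4:
        raise ValueError("truncated header")
    (count,) = struct.unpack_from("<I", blob, 0)
    table_end = 4 + count * 8
    if table_end > len(blob):
        raise ValueError("record table exceeds file size")
    records = []
    for i in range(count):
        offset, length = struct.unpack_from("<II", blob, 4 + i * 8)
        # The critical check: never dereference an offset/length pair
        # without proving it lies inside the blob's payload region.
        if offset < table_end or offset + length > len(blob):
            raise ValueError(f"record {i} points outside the payload")
        records.append(blob[offset:offset + length])
    return records
```

In kernel-mode C the same checks apply, with the difference that a missed one takes down the whole machine instead of raising an exception.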

        • @whatwhatwhatwhat
          2
          5 months ago

          Makes sense that it was a definitions update that caused this, and I get why that’s not something you’d want to lag behind on like you could with the agent. (Putting aside that one of the selling points of next-gen AV/EDR tools is that they’re less reliant on definitions updates compared to traditional AV.) It’s just a bit wild that there isn’t more testing in place.

          It’s like we’re always walking this fine line between “security at all costs” vs “stability, convenience, etc”. By pushing definitions as quickly as possible, you improve security, but you’re taking some level of risk too. In some alternate universe, CS didn’t push definitions quickly enough, and a bunch of companies got hit with a zero-day. I’d say it’s an impossible situation sometimes, but if I had to choose between outage or data breach, I’m choosing outage every time.

    • @Plopp
      3
      5 months ago

      Couldn’t it, though? 🤔

      IANAD and AFAIU, not in kernel mode. Things like trying to read non-existent memory in kernel mode are supposed to crash the system, because continuing could be worse.

        • @chaospatterns
          1
          edit-2
          5 months ago

          They could, and clearly they should have, but hindsight is 20/20. Software is complex, and there are a lot of places that invalid data could come in.

    • @[email protected]
      2
      5 months ago

      The company routinely tests its software updates before pushing them out to customers, CrowdStrike said in the report. But on July 19, a bug in CrowdStrike’s cloud-based testing system — specifically, the part that runs validation checks on new updates prior to release — ended up allowing the software to be pushed out “despite containing problematic content data.”

      It is time to write tests for tests!

      • @Passerby6497
        1
        5 months ago

        My thought is to have a set of machines that have to run the update for a while; all of them must pass to allow it to move forward, and if any single machine doesn’t, it halts any further rollout.
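That canary gate can be sketched in a few lines; the `deploy` and `healthy` callbacks here are hypothetical stand-ins for whatever an update pipeline actually uses:

```python
def staged_rollout(rings, deploy, healthy, soak_checks=3):
    """Push an update through successive rings of machines.

    rings:   list of lists of machine ids, smallest ring first
    deploy:  callback that installs the update on one machine
    healthy: callback reporting whether a machine is still passing
    Returns the rings actually completed. The rollout halts the
    moment any machine in the current ring fails a soak check, so
    later rings never receive the update."""
    completed = []
    for ring in rings:
        for machine in ring:
            deploy(machine)
        for _ in range(soak_checks):
            if not all(healthy(m) for m in ring):
                return completed  # halt: a canary failed, don't go wider
        completed.append(ring)
    return completed
```

With `rings=[canaries, everyone_else]`, one failing canary means the wide ring never sees the update.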

    • @AA5B
      1
      edit-2
      5 months ago

      a bug in CrowdStrike’s cloud-based testing system

      Always blame the tests. There are so many dark patterns in this industry, including blaming QA for being the last group to touch a release, that I never believe “it’s the tests”.

      There’s usually something more systemic going on: something like this gets missed by project management and developers, or maybe they have a blind spot and assume it will never happen, or maybe there’s a lack of communication or planning, or maybe they outsourced testing to the cheapest offshore providers, or maybe everyone is under huge time pressure. But no, “it’s the tests”.

      Ok, maybe I’m not impartial, but when I’m doing a root cause analysis on how something like this got out, my employer expects a better answer than “it’s the tests”.

  • @essteeyou
    25
    5 months ago

    Oh, finally, I have been waiting for so long.

  • @Wispy2891
    19
    5 months ago

    This CrowdStrike stuff seems like an expensive subscription.

    I saw a lot of photos of crashed ad screens.

    Why the hell are corps paying this much money for Windows + CrowdStrike for a glorified digital picture frame?? Wouldn’t it be 100x cheaper to do it with some embedded stuff instead of having a full desktop computer running a full desktop OS???

    • @[email protected]
      2
      5 months ago

      Yeah, an RPi or similar with a screen would be more than plenty for this, and the Pi Zero is really small. Connect that to a central Linux server with a hot backup or two (through local DNS) and you’ll have a hard time crashing it.
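A sketch of the client side of that idea, with made-up hostnames and paths: the display box resolves one internal name (which local DNS can repoint at the hot backup) and falls back to the last cached frame if the fetch fails, so the screen degrades gracefully instead of crashing to an error dialog:

```python
import urllib.request

SIGNAGE_URL = "http://signage.internal/current.png"  # hypothetical local DNS name
CACHE_PATH = "/var/cache/signage/current.png"

def fetch_frame(url=SIGNAGE_URL, cache=CACHE_PATH, timeout=5):
    """Return image bytes for the display.

    Tries the central server first; on any failure (server down, or
    DNS mid-switch to the hot backup) it serves the last cached frame.
    Assumes at least one frame has been cached successfully before."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = resp.read()
        with open(cache, "wb") as f:  # refresh cache on success
            f.write(data)
        return data
    except OSError:  # URLError and socket timeouts are both OSErrors
        with open(cache, "rb") as f:  # fall back to last known-good frame
            return f.read()
```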

  • Semi-Hemi-Lemmygod
    14
    5 months ago

    For the rest of history, this sort of thing will mention CrowdStrike, or it might even be called a “crowdstrike.”

    You can’t buy that kind of marketing

  • @riodoro1
    7
    5 months ago

    Ok. Can we get a solar storm next? I want Linux servers out this time too.

  • @[email protected]
    6
    5 months ago

    Do we actually know? We know that CrowdStrike was the cause, but we don’t actually know what went wrong or how it happened. It’s unfree, proprietary, closed-source software; we just have to take their word for it, which for all intents and purposes is PR, in line with the fact that it comes from a profit-driven organisation.

    • @lightsblinken
      1
      5 months ago

      this is exactly the question that needs answering… the PIR is bullshit

  • @[email protected]
    1
    5 months ago

    Pretty soon we are gonna have to start deciding if it’s safer for enterprise computers to run without AV or AMP.

  • @bigFab
    1
    5 months ago

    Beautiful