• WxFisch
    20 · 2 months ago

    From my reading this is misleading at best and likely wrong. I don’t work with CrowdStrike Falcon, but I have installed and maintained very similar EDR tools in enterprise environments, and the channel updates referenced here are the modern equivalent of definition updates for a classic AV engine. Being up to date is the entire point, so typically the only global options are to grab those updates from the vendor or host them internally on a central server. You wouldn’t want to slow-roll or stage those updates, since that fundamentally reduces the protection against zero-days and novel attacks that the product exists to detect and stop. These are not engine updates: they don’t change the code that is running. They give the code new information about what an attack will look like, so it can detect malicious activity as soon as CrowdStrike knows what the indicators of compromise (IoCs) look like.

    In this case it appears that one of these updates pointed to a bad memory location, which caused the engine to crash the OS; it wasn’t a code update (like a software patch) that did it. That should have been caught in QA checks prior to the channel update being pushed out, but it’s in CrowdStrike’s interest to push these updates to all of their customers’ PCs as quickly as they can to allow detection of novel attacks.
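
    To make the distinction concrete: the engine’s code stays the same, but it acts on whatever the new channel data tells it. Here is a minimal Python sketch, using an entirely hypothetical record format (the real channel file layout isn’t public), of how a data-only update can still crash a parser that trusts its input:

    ```python
    import struct

    def load_definition(blob: bytes) -> bytes:
        """Parse one record of a hypothetical definition format: a 4-byte
        little-endian offset and a 4-byte length pointing at a signature
        pattern elsewhere in the blob."""
        offset, length = struct.unpack_from("<II", blob, 0)
        # Without this check, a malformed update makes the engine read
        # outside the buffer: an exception in user space, but a machine
        # crash in a kernel driver.
        if offset < 8 or offset + length > len(blob):
            raise ValueError(f"definition points outside blob: {offset}+{length}")
        return blob[offset:offset + length]

    # Well-formed record: an 8-byte header, then the pattern it points at.
    good = struct.pack("<II", 8, 4) + b"\xde\xad\xbe\xef"
    print(load_definition(good).hex())  # deadbeef

    # Malformed record of the kind described above: offset far past the end.
    bad = struct.pack("<II", 0xFFFF0000, 4)
    try:
        load_definition(bad)
    except ValueError as e:
        print("rejected:", e)
    ```

    In user space a mistake like this is a traceback; in kernel mode it takes the whole machine down, which is why the validation has to live in the engine itself, not only in release QA.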

    • @[email protected]
      link
      fedilink
      English
      302 months ago

      That should have been caught in QA checks prior to the channel update being pushed out…

      I work in QA, and part of the job is justifying why it’s necessary to keep a team of people who don’t actually “produce” anything. Either their QA team is now in the hot seat, or CrowdStrike is now realizing why it needs one.

      Either way, it sounds like a basic smoke test would have uncovered the issue, and the fact that nobody found this means nobody bothered to do one of the most basic tests: turn it on and see if it “catches fire.”
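
      For illustration, a smoke test here can be as small as loading the new definition file the way the shipping engine would and checking that the loader survives. A sketch, where load_definitions.py is a hypothetical harness standing in for the real engine:

      ```python
      import subprocess
      import sys

      def smoke_test(channel_file: str) -> bool:
          """Turn it on and see if it catches fire: parse the candidate
          definition file in a disposable process (a stand-in for a
          throwaway VM) and report whether the loader survived."""
          result = subprocess.run(
              [sys.executable, "load_definitions.py", channel_file],
              capture_output=True,
              timeout=60,
          )
          if result.returncode != 0:
              print(f"FAIL: loader died on {channel_file}:\n{result.stderr.decode()}")
              return False
          print(f"PASS: {channel_file}")
          return True

      if __name__ == "__main__":
          sys.exit(0 if smoke_test(sys.argv[1]) else 1)
      ```

      Run against a file that kills the loader, this fails in seconds, which is the whole value of a smoke test.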

      • a1studmuffin 🇦🇺
        15 · 2 months ago

        God, even if they didn’t have QA test it, they should have had continuous integration running to test all new channel updates against all versions of their program, considering the update will affect all of them. What an epic process failure.
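
        Without knowing what their pipeline actually looks like, the gate could be as simple as a pytest matrix: run the candidate channel file through every engine build still in the field and refuse to ship on any failure. The version list and the engines/<version>/load_definitions.py harness below are placeholders:

        ```python
        import subprocess
        import sys

        import pytest

        # Hypothetical inventory: every engine version still deployed, plus
        # the candidate channel file the pipeline wants to push.
        SUPPORTED_VERSIONS = ["7.09", "7.10", "7.11"]
        CANDIDATE = "channel_updates/candidate.bin"

        @pytest.mark.parametrize("version", SUPPORTED_VERSIONS)
        def test_candidate_loads(version):
            """The candidate must load cleanly under every supported engine
            build before it is allowed out the door."""
            result = subprocess.run(
                [sys.executable, f"engines/{version}/load_definitions.py", CANDIDATE],
                capture_output=True,
                timeout=60,
            )
            assert result.returncode == 0, (
                f"engine {version} crashed on {CANDIDATE}: {result.stderr.decode()}"
            )
        ```

        The matrix grows with each supported version, but it stays cheap compared to bricking the install base.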

    • @[email protected]
      link
      fedilink
      English
      202 months ago

      Being up to date is the entire point, so typically the only global options are to grab those updates from the vendor or host them internally on a central server. You wouldn’t want to slow-roll or stage those updates, since that fundamentally reduces the protection against zero-days and novel attacks that the product exists to detect and stop.

      That’s not your, or CrowdStrike’s, decision to make. If organizations have applied settings to not install updates automatically, then that’s what they expect to happen, and you need to honour it. You don’t “know best”. They do.

    • @Docus
      13 · 2 months ago

      Being up to date is the entire point

      No, it isn’t. The point is to keep systems safe and operational. Blindly rolling out untested updates is not a good strategy for that. I have seen entire systems shut down by false alerts from updated antivirus software; luckily it was only test environments, before those updates were rolled out to production. It does not take much to test updates like this before rolling them out to your entire organisation.
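
      As a sketch of what that staging might look like (ring names, host lists, and the health check are all made up for illustration):

      ```python
      import time

      # Hypothetical rollout rings: expendable lab machines first, then a
      # small canary slice of production, then the rest of the estate.
      RINGS = [
          ("test-lab", ["lab-01", "lab-02"]),
          ("canary", ["branch-a-pc-01", "branch-b-pc-01"]),
          ("fleet", [f"pc-{i:04d}" for i in range(1000)]),
      ]

      def push_and_check(host: str, update: str) -> bool:
          """Stand-in for whatever management tool deploys the update;
          returns False if the host stops reporting healthy afterwards."""
          print(f"pushing {update} to {host}")
          return True

      def staged_rollout(update: str, soak_seconds: int = 3600) -> None:
          """Push ring by ring, and stop the moment any ring looks sick."""
          for ring, hosts in RINGS:
              if not all(push_and_check(h, update) for h in hosts):
                  raise RuntimeError(f"halting rollout: failures in ring {ring!r}")
              print(f"ring {ring!r} healthy; soaking before the next ring")
              time.sleep(soak_seconds)  # give problems time to surface
      ```

      The point of the rings is blast radius: a bad update takes out two lab boxes, not every machine you own.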

    • @CriticalMiss
      11 · 2 months ago

      Our organization is configured to install N-1 of the current release specifically to avoid this type of stuff. Does it matter? No, we got hit just like everyone else: the N-1 policy staggers sensor versions, but these channel updates went out to every sensor regardless of version.

    • @[email protected]
      link
      fedilink
      English
      6
      edit-2
      2 months ago

      I’m getting real sick of companies acting like rapists and society just accepting it, if not justifying it for them.

      No means no. Plain and simple.