It's called
How many people is this affecting?
Both articles just say “it’s bad, so bad”
Falcon Sensor is one of the most popular security products on Windows servers. Practically every large company buys CrowdStrike services to protect its servers.
People who aren’t affected:
- Linux and Mac servers
- Private individuals and smaller businesses with Windows machines who don't buy CrowdStrike services.
- Companies that bothered to create proper test environments for their production servers.
People who are affected:
Companies that use Windows machines, buy Falcon Sensor from CrowdStrike, and are too stupid/cheap to have proper update policies.
In terms of numbers, we don’t know how many people are affected or how much it will cost. A lot. Globally. Flights were grounded, surgeries rescheduled, bank transfers and payments interrupted, and millions of employees couldn’t turn on their computers this morning.
proper test envs
Nah, let’s direct ship anything any vendor sends us.
“We need to allocate our available budget to profit-generating processes. This just seems like a luxury we can’t afford.”
-thousands of overpaid dipshits, yesterday.
Does anyone know how these CrowdStrike updates are actually deployed? Presumably the software has its own update mechanism to react to emergent threats without waiting for Patch Tuesday. Can users control the update policy for these 'channel files' themselves?
This doesn’t really answer my question, but CrowdStrike do explain a bit here: https://www.crowdstrike.com/blog/technical-details-on-todays-outage/
These channel files are configuration for the driver and are pushed several times a day. It seems the driver can hit a page fault under certain conditions; a mistake in a config file triggered that condition and put a lot of machines into a BSOD boot loop.
I think it makes sense that this was a preexisting bug in the driver which was triggered by an erroneous config. What I still don’t know is whether these channel updates get a staged deployment (presumably driver updates do), and what fraction of machines that got the bad update actually BSOD'd.
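For intuition, here's a minimal user-space sketch of the kind of bug that fits that description. None of these structure names or the file layout come from CrowdStrike (Falcon's internals aren't public); the point is just that a parser which trusts an offset from a pushed config and dereferences it without bounds-checking turns a malformed file into an invalid memory access, which in kernel mode means a bugcheck on every boot until the bad file is removed.

```c
/* Hypothetical sketch only; names and layout are made up, not CrowdStrike's code. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Imagined channel-file layout: a header followed by rule records. */
struct channel_header {
    uint32_t magic;
    uint32_t rule_offset;   /* byte offset of the first rule record */
    uint32_t rule_count;
};

struct rule_record {
    uint32_t id;
    uint32_t flags;
};

/* Buggy style: trust rule_offset blindly. In kernel mode an out-of-range
 * offset becomes an invalid page access -> bugcheck -> BSOD boot loop
 * while the bad file is still on disk. */
static const struct rule_record *first_rule_unchecked(const uint8_t *buf,
                                                      const struct channel_header *h)
{
    return (const struct rule_record *)(buf + h->rule_offset);
}

/* Defensive style: validate the offset against the real file size first,
 * so a malformed config is rejected instead of crashing the machine. */
static const struct rule_record *first_rule_checked(const uint8_t *buf, size_t len,
                                                    const struct channel_header *h)
{
    if (h->rule_offset > len ||
        len - h->rule_offset < sizeof(struct rule_record))
        return NULL;        /* bad channel file: refuse to apply it */
    return (const struct rule_record *)(buf + h->rule_offset);
}

int main(void)
{
    /* Simulate a malformed update: the offset points far past the end. */
    uint8_t file[64] = {0};
    struct channel_header hdr = { .magic = 0xC0FFEE,
                                  .rule_offset = 0x00FFFFFF,
                                  .rule_count = 1 };

    const struct rule_record *r = first_rule_checked(file, sizeof file, &hdr);
    printf(r ? "config accepted\n" : "config rejected: rule offset out of range\n");

    /* first_rule_unchecked(file, &hdr) would read ~16 MB past the buffer;
     * in a kernel driver that's an unrecoverable page fault. */
    (void)first_rule_unchecked;
    return 0;
}
```

The design lesson is the same regardless of the exact bug: kernel code has no safety net, so anything fed to it from a frequently pushed config has to be validated before it's dereferenced, and ideally rolled out in stages so one bad file can't take down every machine at once.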
Anyway, they should rewrite it in Rust.
I don’t know for sure, but I would imagine that it varies based on the service level.
Thank you very much
Damn this morning I wished so hard my company was in the affected group. Alas, we all still had to work.
Check out https://downdetector.com. It’s disrupting big business.
Is it saying each service had a few hundred complaints and then leveled out?
One of them had 7k.
But that isn’t only tracking this BSOD thing, right?
Correct, this is overall/all incidents.
I have not yet seen any effects in my large multinational organization.
I heard that at Singapore's international airport and a few Indian airports they had to write out all the tickets by hand.
Sounds terrible for the employees.
Oof.
Flights were grounded across the US for everything but Southwest, I think.
Whoa thanks, I didn’t hear that
Wild
Yeah. It also affected banks, hospitals, retailers, distributors… someone definitely got fired. And it’s not even something that can be fixed remotely.
Oh I was wondering about that. Ha. Nice. Good foreshadowing for the next big solar flare.
The pro-Linux German government members are feeling validated. 🦎🐧
Cyanotypists love international blue screen day!
Keep installing these compulsory updates, which your overlords let you postpone but not decline. Good sheep.
“Stop installing updates to your security software and let it stagnate”
Good sheep. Updates good, no updates bad.
I wonder how many unpatched zero-days whatever you're running has.
How many programmers does it take to screw in a light bulb? None, they already screwed up everything they could.
This wasn’t a Windows issue, you fucking neanderthal.
Baaah, baaah, four legs good, two legs bad