- cross-posted to:
- [email protected]
- [email protected]
- technology
cross-posted from: https://lemmy.ml/post/18154572
All our servers and company laptops went down at pretty much the same time. Laptops have been bootlooping to blue screen of death. It’s all very exciting, personally, as someone not responsible for fixing it.
Apparently caused by a bad CrowdStrike update.
I work in the field. They pushed an update; their software loads a low-level driver into the kernel at boot time, and that driver is causing servers and workstations to crash. The only fix so far is to boot into safe mode, uninstall the bad update, and restart again.
Imagine having to do that by hand, on every Windows device in your organization.
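For reference, the widely circulated workaround (per CrowdStrike's public remediation guidance) boiled down to a few manual steps from Safe Mode or the Windows Recovery Environment; the path and the C-00000291 channel-file pattern below are from that guidance, so exact details may vary by version:

```
REM First boot into Safe Mode or the Windows Recovery Environment (command prompt)
cd %WINDIR%\System32\drivers\CrowdStrike
REM Delete the affected channel file(s) that the bad update shipped
del C-00000291*.sys
REM Then reboot normally
```

On machines with BitLocker enabled, you also need the recovery key just to get into Safe Mode, which is part of why doing this at scale was so painful.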
There are people on r/sysadmin who have 50,000 machines to deal with. A lot of companies have remote workers, too.
Meh, that’s an easy fix compared to some other BSODs I’ve had to deal with.
The every device part is daunting but again, at least it’s an easy fix.
But have those ever been released as an update?
And with the employee-to-computer ratio only getting worse, this really highlights a lot of issues in the system.
But have those ever been released as an update?
What BSOD? Many times.
That’s actually really funny. I’m more of a Linux user, so I didn’t realise how down bad things are over there.
It really isn’t.
Or wasn’t until yesterday
Not really if you test things.
You are testing things right?
I was with a large organization as tech support, and upper IT pushed out an update that corrupted everybody’s certificates for logging into the network. Imagine having to talk 40k users, most of whom whined and bitched to us about having to do all this work to fix the computer, through removing the old certificate, rebooting, logging in with a backup account to get the new one, rebooting again, and verifying that they could log in. Each computer took about 20-40 minutes. We only had about 50 of us working at peak hours. It took about two months of non-stop calls to get them all fixed.
YIKES … I’ve got one worse still. I was NOC at a company where one of my friends from Desktop Services made a mistake pushing hard drive encryption and basically corrupted the hard drives of a large number of laptops. It wasn’t everyone, thank god, because they were rolling it out in stages … but it was THOUSANDS, and there was no real way to get the data back. Every single one had to be re-imaged.
Somebody forgot about No Changes Friday.
This has been a lot of fun from the perspective of someone not affected. Apparently CrowdStrike have lost 20% of their share price today.
The fact that they can fuck up this badly and only lose 20 percent is kind of hilarious. Major infrastructure across the world is on its knees because of their fuck-up. If that’s not enough to kill them, then nothing is.
Edit - when I checked again they were rebounding, only down 11.3 percent today now. I guess the stock market has determined that this fuck-up isn’t so bad.
They are in trouble as a company. I bet the big market leaders are going to reevaluate and potentially move to something else.