• @breadsmasher
      3
      2 months ago

      No idea why someone downvoted you for this!

      Absolutely, we're entirely unsure of the real root cause too (until Microsoft releases an explanation).

      We have pretty simple networking, but for a while internal VNet communication was really all over the place. That seems to have stabilised for us recently.

      (UK South / UK West regions)

        • @breadsmasher
          2
          2 months ago

          … If your shift isn't getting cut …

          Exactly! We made sure to contact everyone who would travel to an office to inform them it's at least a half day off and to check in at lunch time. We emailed everyone WFH the same thing, and sent an SMS too just in case their computers were down.

          But I (I assume you are in the same boat) was up much earlier to tackle this! Fun day.

          • @SirDerpy
            1
            2 months ago

            I’ve got easy mode: On-call woke me up. We’re five techies and equity holders with one employee, an intern. We pushed a few buttons to effect plan B and enable plan C, and sent a medium-priority notification by chat.

            Then, we sent the intern to the office to watch the meters. If she fucks up, it’s not a problem until about Tuesday. So we’re waiting to see whether she automates the task, uses the existing routing template to make sure she’s notified of wonkiness if she leaves the office, and asks permission. She’s reinventing the wheel and can then compare her version to our work. I thought I’d give her a couple of hours before checking in.

    • @JonsJavaM
      3
      2 months ago

      Article has been updated with the root cause - CrowdStrike. The reason is simple: Azure has tons of Windows systems that are protected with CrowdStrike Falcon. CrowdStrike released a bad update that is causing boot loops on Windows computers, including Windows VM servers.