After reading this article, I had a few dissenting thoughts. Maybe someone will share their perspective?

The article suggests not running critical workloads virtually based on a failure scenario of the hosting environment (such as ransomware on hypervisor).

That does invite the ‘all your eggs in one basket’ phrase, so I agree that running at least one instance of a service physically could be justified. But threat actors will try to time their attacks against both if possible; adding complexity cuts both ways here.

I don’t really agree with the comments about not patching, however. The premise that the physical workload or instance would be patched or updated more often than the virtual one seems unfounded. Hesitance to patch systems is more about weighing uptime against downtime, breakage, and risk, in my opinion.

Is your organization running critical workloads virtually like everything else, a combination of physical and virtual, or a combination of all of the above plus cloud (off-prem) solutions?

  • @[email protected]
    link
    fedilink
    English
    1820 hours ago

    Most organizations will avoid patching due to the downtime alone, instead using other mitigations to avoid exploitation.

    If you can’t patch because of downtime, maybe you are cheaping out too much on redundancy?
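    The redundancy point can be sketched concretely: with enough spare capacity, you can patch a pool one node at a time and never take the service down. A minimal, hypothetical sketch — the node names and the no-op patch step are invented for illustration, not any particular tool’s API:

    ```python
    # Sketch of patching a redundant pool one node at a time so the
    # service stays up throughout. Swap in real drain/patch/health-check
    # steps for the placeholders.

    def rolling_patch(nodes, min_in_service, patch_node):
        """Patch nodes one at a time, refusing to drain below min_in_service."""
        in_service = set(nodes)
        patched = []
        for node in nodes:
            if len(in_service) - 1 < min_in_service:
                raise RuntimeError(f"draining {node} would break redundancy")
            in_service.discard(node)   # drain: stop routing traffic to it
            patch_node(node)           # apply updates, reboot, etc.
            in_service.add(node)       # passed health checks, back in rotation
            patched.append(node)
        return patched

    # With three nodes and a requirement that two stay in service,
    # every node can be patched without an outage window.
    order = rolling_patch(["web-01", "web-02", "web-03"], 2, lambda n: None)
    ```

    With only two nodes and the same requirement, the first drain would already violate the minimum, which is exactly the “cheaping out on redundancy” failure mode.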

    • @[email protected]
      link
      fedilink
      English
      31 hour ago

      That immediately stuck out to me as well, what a lame excuse not to patch. I’ve been in IT for a while now, and I’ve never worked in any shop that would let that slide.

    • RedFoxOP
      2 points · 14 hours ago

      Yeah, that’s pretty risky at this point in time.

      I guess the MBA people weigh the total cost of revenue/reputation loss from things like ransomware recovery and backup restoration against the cost of making their IT systems resilient?

      Personally, I don’t think so (in many cases) or they’d spend more money on planning/resilience.

  • @[email protected]
    link
    fedilink
    English
    12
    edit-2
    17 hours ago

    I work for a newspaper. It has been published without fail every single day since 1945 (when my country was still basically just rubble, deservedly).
    So even when all our systems are encrypted by ransomware, the newspaper MUST still be printable as a matter of principle.
    We run all our systems virtualized, because everything else would be unmaintainable and it’s a 24/7 operation.

    But we also have a copy of the most essential systems running on bare metal, completely air-gapped from everything else, including the internet.
    Even I as the admin can’t access them remotely in any way. If I want to, I have to walk over to another building.

    In case of a ransomware attack, the core team meets in a room with only internal wifi, and is given emergency laptops from storage with our software preinstalled. They produce the files for the paper, save them on a USB stick, and deliver that to the printing press.

      • @[email protected]
        link
        fedilink
        English
        28 hours ago

        We don’t. It’s a separate, simplified system that only lets the core team members access the layout, editing, and typesetting software that is installed locally on the bare-metal servers.
        In emergency mode, they get written articles and images from the reporters via otherwise unused, remotely hosted email addresses, and as a second backup, Signal.
        They build the pages from that, send them to the printers, and the paper is printed old-school using photographic plates.

        • umami_wasabi
          2 points · 7 hours ago

          That’s a very high degree of BCDR planning, and quite costly I assume.

          • @[email protected]
            link
            fedilink
            English
            2
            edit-2
            7 hours ago

            It’s less than the cost of our cybersecurity insurance, which will probably drop us on a technicality when the day comes.
            And it’s not entirely an economic decision. The paper is family-owned in the 3rd generation, historically relevant as one of the oldest papers in the country, and absolutely no one wants to be the one in charge when it doesn’t print for the first time ever.

    • RedFoxOP
      6 points · 14 hours ago

      Seems like your org has taken resilience and response planning seriously. I like it.

      • @[email protected]
        link
        fedilink
        English
        514 hours ago

        Another newspaper in our region was unprepared and got ransomwared. They’re still not back to normal, over a year later.
        After that, our IT basically got a blank check from executive to do whatever is necessary.

        • RedFoxOP
          5 points · 14 hours ago

          Blank check

          Funny how that seems to often be the case. They need to see the consequences, not just be warned. An ‘I told you so’ moment…

          • @[email protected]
            link
            fedilink
            English
            212 hours ago

            I’m just glad they got to see the consequences in another company.
            Their senior IT admin had a heart attack a month after the ransomware attack.

    • @[email protected]
      link
      fedilink
      English
      317 hours ago

      save them on a USB stick

      …which is also kept with the air-gapped system and tossed once used, I assume…

      • @[email protected]
        link
        fedilink
        English
        417 hours ago

        There’s several for redundancy, in their original packaging, locked in a safe, and replaced yearly.

  • @[email protected]
    link
    fedilink
    English
    417 hours ago

    If the virtual one borks, spin it back up. That’s a plus.

    Some things should run at least one instance on bare metal, like domain controllers.

    It’s not a one-size-fits-all.

  • @Im_old
    26 points · 1 day ago

    That article is SO wrong. You don’t run one instance of a tier-1 application. The instances are in separate DCs, on separate networks, and the firewall rules allow only application traffic. Management (RDP/SSH) comes from another network, through bastion servers. At the very least you have daily/monthly/yearly (yes, yearly) backups, and you take snapshots before patching/app upgrades.

    Or you even move to containers, with bare hypervisors deployed in minutes via netinstall and configured via Ansible. You got infected? Too bad: reinstall and redeploy. There will be downtime, but nothing horrible. The DBs/storage are another matter of course, but that’s why you have synchronous and asynchronous replicas, read-only replicas, offsites, etc.

    But for the love of whatever you hold dear, don’t run stuff on bare metal because “what if the hypervisor gets infected”. Consider the attack vector and work around that.
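    The daily/monthly/yearly rotation mentioned above is essentially a grandfather-father-son retention policy. A minimal sketch of the pruning logic — the tier counts are illustrative defaults, not a recommendation:

    ```python
    from datetime import date, timedelta

    def select_retained(backup_dates, keep_daily=7, keep_monthly=12, keep_yearly=5):
        """Return the subset of backups kept under a daily/monthly/yearly rotation."""
        backups = sorted(backup_dates, reverse=True)  # newest first
        retained = set(backups[:keep_daily])          # daily tier: newest N

        # Monthly tier: the newest backup in each of the last `keep_monthly` months.
        months = []
        for b in backups:
            if (b.year, b.month) not in months:
                months.append((b.year, b.month))
                if len(months) <= keep_monthly:
                    retained.add(b)

        # Yearly tier: the newest backup in each of the last `keep_yearly` years.
        years = []
        for b in backups:
            if b.year not in years:
                years.append(b.year)
                if len(years) <= keep_yearly:
                    retained.add(b)

        return retained

    # Two years of daily backups, pruned down to the rotation set.
    history = [date(2024, 6, 1) - timedelta(days=i) for i in range(730)]
    keep = select_retained(history)
    ```

    Anything not covered by a tier gets pruned, so storage stays bounded while you keep restore points reaching years back.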

    • @thirteene
      4 points · 1 day ago

      You can prevent downtime by mirroring your container repository and keeping a cold stack in a different cloud service. We wrote up an LOE (level of effort) and decided the extra maintenance wasn’t worth it to plan for provider failures. But then providers only sign contracts if you’re in their cloud, so you end up doing it anyway.

      Unfortunately most victims aren’t using best practices let alone industry standards. The author definitely learned the wrong lesson though.

  • @[email protected]
    link
    fedilink
    English
    18
    edit-2
    1 day ago

    If the hypervisor or any of its components are exposed to the Internet

    Lemme stop you right there, wtf are you doing exposing that to the internet…

    (This is directed at the article writer, not OP)

    • RedFoxOP
      3 points · 14 hours ago

      Lol, even in 2024, with free VPN/overlay solutions, they just won’t stop exposing control-plane things to the public internet…

      • @[email protected]
        link
        fedilink
        English
        220 hours ago

        Sure, but the author makes it sound like that’s their standard way of doing things, which is insane.

        And if you do have a misconfiguration, the rational thing is to fix that, not dump the entire platform.

    • @terminhell
      2 points · 1 day ago

      True horrors

      Like, that’s what VPNs and jump boxes are for, at the very least.

      • @[email protected]
        link
        fedilink
        English
        220 hours ago

        Wanna bet they expose SSH on port 22 to the internet on their “critical” servers? 🤣

  • @linearchaos
    11 points · 1 day ago

    Heh, whatever you do don’t do what everybody in the world has been doing successfully for the past 20 years.

  • @solrize
    6 points · 1 day ago

    Most everything everywhere is virtual these days, even when the host hardware is single-tenant. Companies running hosted applications on bare metal are rare. I run personal stuff that way because Proxmox was too much hassle, but a more serious user would have just dealt with it.

  • @ramielrowe
    3 points · edited · 1 day ago

    If we boil this article down to its most basic point, it actually has nothing to do with virtualization. The true issue here is centralized infra/application management. The article references two ESXi CVEs that deal with compromised management interfaces.

    Imagine a scenario where we avoid virtualization by running Kubernetes on bare-metal nodes, with each Pod assigned exclusively to a Node. If a threat actor can reach the Kubernetes management interface and exploit a vulnerability in it, they can immediately compromise everything within that Kubernetes cluster. We don’t even need a container management platform: imagine a collection of bare-metal nodes managed by Ansible via Ansible Automation Platform (AAP). If a threat actor can reach AAP and exploit it, they can compromise everything managed by that AAP instance.

    The author fundamentally misattributes the issue to virtualization. The issue is centralized management, and there are significant benefits to using higher-order centralized management solutions.

    • RedFoxOP
      2 points · 14 hours ago

      Agreed.

      Don’t we all use centralized management because there’s cost and risk involved when we don’t?

      More management complexity, missed systems, etc.

      So we’re balancing risk vs operational costs.

      Makes sense to swap in container or automation solutions for virtualization in this discussion.

    • @francisfordpoopola
      1 point · 23 hours ago

      Would you care to expand on this? I understand many of the pieces mentioned but am not an expert on this and am trying to learn.

      • @ramielrowe
        1 point · 2 hours ago

        In a centralized management scenario, the central controlling service needs the ability to control everything registered with it. So, if the central controlling service is compromised, it is very likely that everything it controlled is also compromised. There are ways to mitigate this at the application level, like role-based and group-based access controls. But, if the service itself is compromised rather than an individual’s credentials, then the application protections can likely all be bypassed. You can mitigate this a bit by giving each tenant their own deployment of the controlling service, with network isolation between tenants. But, even that is still not fool-proof.

        Fundamentally, security is not solved by one golden thing. You need layers of protection. If one layer is compromised, others are hopefully still safe.
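        The scoped-credential point can be made concrete with a toy model: a compromised role only reaches the hosts it was granted, while the central controller, which necessarily holds every grant, reaches them all. The roles and host names here are invented for illustration; real RBAC systems (e.g. in Kubernetes) are far richer:

        ```python
        # Toy model of blast radius under role-based access control.
        # Identities and hosts are hypothetical stand-ins.
        ROLE_GRANTS = {
            "web-admin": {"web-01", "web-02"},            # scoped operator credential
            "db-admin": {"db-01"},
            "controller": {"web-01", "web-02", "db-01"},  # central service sees all
        }

        def blast_radius(compromised_identity):
            """Hosts an attacker can reach after compromising one identity."""
            return ROLE_GRANTS.get(compromised_identity, set())
        ```

        Compromising “db-admin” exposes one host; compromising the controller service itself exposes everything it manages, which is the centralization risk described above.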

        • @francisfordpoopola
          1 point · edited · 1 hour ago

          Makes perfect sense. I’m not as familiar with the admin side of things.

          TY for taking the time to explain.