As IT leaders move away from VMware, they face a critical decision: do they stick with traditional storage architectures, or is now the time to finally unlock the full potential of an infrastructure that converges virtualization, storage, and networking technologies?

Early convergence efforts centered on hyperconverged infrastructure (HCI), where storage ran as a virtual machine under the hypervisor, an approach commonly called a virtual SAN (vSAN). While adoption has lagged behind traditional three-tier architectures, recent advancements have addressed vSAN's past shortcomings, making it worth reconsidering.

  • mosiacmango

    The above is an ad, but it's both open about it and technically interesting, if a bit repetitive.

    I honestly had no idea a vSAN was primarily running on hidden storage VMs on the host. I thought the storage stack was as tightly integrated as networking/compute at the hypervisor level, so the light breakdown there was worth the read.

    • @computergeek125

      Not all of them. Ceph on Proxmox and (IIRC) VMware vSAN run bare metal. That statement was a call-out for Nutanix, which runs its storage inside VMs on the cluster. Both of these have been doing so for years.

      • mosiacmango

        Okay, that tracks better. I'm familiar with Ceph and Proxmox using it as a "fake" vSAN. I'm also familiar with VMware's vSAN and had never seen any indicator of an internal storage VM, so that was odd.

        It being Nutanix doing the above makes sense as I haven’t worked with them yet.

        • @computergeek125

          Out of curiosity, why would you call Ceph a fake HCI? As far as I've seen, it behaves akin to any of the other HCI systems I've used.

          • mosiacmango

            I'd call it viable hyperconverged infrastructure when it's in use as such, like with Proxmox, but it's not scoped to just being a vSAN. It's a distributed storage system; its design is far wider than just being used for HCI/vSAN/etc.
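To make the "wider than a vSAN" point concrete: Ceph is at heart a distributed object store that maps each object to storage daemons (OSDs) by hashing, so any client can locate data without a central lookup. A toy Python sketch of that placement idea (deliberately simplified; real Ceph uses its CRUSH algorithm and cluster maps, and this function is illustrative only):

```python
import zlib


def place_object(name: str, pg_num: int, osds: list[str], replicas: int = 3) -> list[str]:
    """Toy stand-in for Ceph-style placement: hash the object name to a
    placement group (PG), then deterministically pick `replicas` distinct
    OSDs for that PG. Not Ceph's actual algorithm."""
    pg = zlib.crc32(name.encode()) % pg_num  # object -> placement group
    chosen: list[str] = []
    step = 0
    while len(chosen) < min(replicas, len(osds)):
        # deterministic pseudo-random walk over OSDs, seeded by the PG
        idx = zlib.crc32(f"{pg}:{step}".encode()) % len(osds)
        if osds[idx] not in chosen:
            chosen.append(osds[idx])
        step += 1
    return chosen


osds = [f"osd.{i}" for i in range(6)]
print(place_object("vm-disk-42/block.0", pg_num=128, osds=osds))
```

Because the mapping is pure computation, every client independently agrees on where data lives. Block devices (RBD), a POSIX filesystem (CephFS), and S3-style object storage (RGW) are all layered on that same object store, which is why Ceph isn't scoped to just HCI.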

  • HorseChandelier

    I have a small cluster running StarWind for my vSAN. For me it's much cheaper than a hardware equivalent and is performant enough.

    Oh and I haven’t had a “stop work” issue with it in 8 years.

    Somewhat remarkably it was OK performance-wise when sync/iSCSI traffic were running on 1Gb copper connections to spinning rust storage… Now I have 10Gb fibre between the hosts, coupled with nvme drives, and it’s quite (comparatively) quick.

    As with all things YMMV… But vSAN is the way for my use case.

  • comador

    “comparing VMware vSAN, Nutanix AOS Storage and VergeIO VergeOS”

    Hint: There is no vSAN/UCI/HCI silver bullet.

    They all suffer from performance-related issues either up front or over time, have scalability deficits or limitations, and offer lackluster price-to-performance ratios (Nutanix can suck my 5yr forced-upgrade balls).

  • @Brkdncr

    Did small Nutanix deployments a while ago, and liked it enough to go all in on vSAN for the datacenter about 7 years ago.

    No regrets. Great performance, no SAN storage, no SAN network. Easy to manage on Dell ReadyNodes.

    Going to go all in with Nutanix next round. Same costs but it’s not Broadcom.

  • Justin

    Ceph / OpenShift Data Foundation is also an option for hyperconverged clustered storage.

    • @ikidd

      Ceph is also quite tightly integrated on Proxmox for smaller deployments.

      • Justin

        Yup! Ceph is quite nice once you're up at the ~6-node scale. OpenShift Data Foundation is based on Rook, which is a nice way to automate Ceph deployment as well.

        I’ve actually been running Rook on my home server rack, up to 120TiB now 😁
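For reference, Rook drives Ceph from a Kubernetes custom resource: you declare the cluster you want and the operator deploys and manages the Ceph daemons. A minimal manifest looks roughly like this (field names follow Rook's CephCluster CRD; the image tag and counts below are placeholder choices, not recommendations):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # placeholder release tag
  dataDirHostPath: /var/lib/rook   # where mons persist state on the host
  mon:
    count: 3                       # monitor quorum size
  storage:
    useAllNodes: true              # consume eligible devices on every node
    useAllDevices: true
```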