I have an old x86_64 computer that I am planning to use as a NAS. Which of the two (OMV or TrueNAS) is the better option? And is it better to run it on bare metal or as a VM on Proxmox?

  • @TBi
    link
    English
    12
    1 year ago

    I've run OMV on bare metal for the last few years and I really like it. I did try TrueNAS and found it too convoluted.

    Nextcloud is a totally different beast that would run on top of either.

    OMV allows you to use ZFS as well if you want (I do), but also gives you the flexibility to use other RAID systems.
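
    For illustration, a minimal sketch of creating a mirrored pool, assuming the OMV-Extras ZFS plugin is installed and with placeholder disk names:

        # /dev/sdb and /dev/sdc are placeholders for your data disks
        sudo zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
        sudo zfs create tank/media     # a dataset you can then share from the OMV UI
        sudo zpool status tank         # confirm the mirror is healthy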

    I just wish they’d left Portainer in as the default. No point reinventing the Docker wheel…

  • @[email protected]
    link
    fedilink
    English
    7
    edit-2
    1 year ago

    I very much like OMV, simply because I can use it with knowledge I already have.

    It is a Debian system -> I understand that

    It uses normal RAID and ext4 -> I understand that (plus, if things go wrong, I can just pop the drives into any other computer and use them without much fuss; a rough sketch of that is below)

    It has an rsnapshot plugin -> I already used that in the past.

    Sure, I’m missing out on fancy stuff like Btrfs/ZFS, but I can live with that.
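
    As a rough sketch of that recovery path (device and mount point are placeholders), on any other Linux machine with the drives attached:

        sudo mdadm --assemble --scan        # detect and assemble the existing md array
        cat /proc/mdstat                    # confirm it came up, e.g. as /dev/md0
        sudo mount /dev/md0 /mnt/rescue     # plain ext4, so an ordinary mount works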

    • @InverseParallax
      link
      English
      3
      1 year ago

      LVM can get you 80% of the way to basic ZFS. I’m a bit similar: old-school Unix, likes Debian because it makes sense, but storage is one of my dominating constraints, so ZFS is mandatory (even if I hate stuff like the way the ARC works by default).
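
      As a hedged sketch of what that 80% can look like with LVM (the volume group name and sizes are made up):

          # mirrored logical volume plus a point-in-time snapshot, assuming a VG named "vg0"
          sudo lvcreate --type raid1 -m 1 -L 500G -n data vg0
          sudo mkfs.ext4 /dev/vg0/data
          sudo lvcreate --snapshot -L 20G -n data_snap vg0/data   # snapshot for consistent backups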

    • @[email protected]
      link
      fedilink
      English
      3
      edit-2
      1 year ago

      TrueNAS SCALE is also Debian and uses standard ZFS for drive management. However, I’m not a fan of the TrueNAS web UI; it feels overly complex, and stuff can be hard to find.

      OMV + MergerFS/SnapRAID, or just mdadm, is nice and simple.
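
      For a rough idea of how simple that stack is, a minimal sketch (paths and disk names are placeholders; OMV's plugins generate the equivalent from the UI):

          # /etc/snapraid.conf (minimal)
          parity  /srv/parity1/snapraid.parity
          content /srv/disk1/snapraid.content
          content /srv/disk2/snapraid.content
          data d1 /srv/disk1
          data d2 /srv/disk2

          # /etc/fstab: pool the data disks into one mount with mergerfs
          /srv/disk1:/srv/disk2  /srv/pool  fuse.mergerfs  defaults,category.create=mfs  0 0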

  • @stanleytweedle
    link
    English
    7
    1 year ago

    I’ve had a little OMV VM running on Proxmox for about 4 years with no issues at all.

    • @reddit_suxOP
      link
      English
      2
      1 year ago

      What are the benefits of running it as a VM rather than on bare metal?

      • @stanleytweedle
        link
        English
        7
        1 year ago

        For me, just the convenience of having everything in one box. It simplifies networking too. I run Home Assistant, OpenWrt, OMV, an Ubuntu desktop VM and a WordPress LXC on a little M93 I jacked up with 32 GB of RAM. Backups are dead simple and it’s all on one little UPS.
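
        As an illustration of how simple the backups are (the VM IDs and storage name are placeholders), the scheduled job in the GUI boils down to something like:

            # snapshot-mode backup of a couple of guests to a storage called "backups"
            vzdump 100 101 --storage backups --mode snapshot --compress zstd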

        Some might prefer metal for other reasons but simplicity and convenience are priorities for me, at least in my homelab.

      • @InverseParallax
        link
        English
        1
        1 year ago

        The rules now are generally: bare metal if that’s all the box will do or it’s the machine’s main task; a container if it’s one of many services; a VM if it’s a larger application you might migrate and I/O isn’t your limitation.

        The line between container and VM is fuzzy, but bare metal means you’re making a design choice for that machine, and if that or another application breaks the machine, you’re screwed.

        In a way FreeBSD is amazing for this: you put all applications in jails and don’t use the main userspace much, but the virtualization story isn’t quite there yet.

      • @[email protected]
        link
        fedilink
        English
        1
        edit-2
        1 year ago

        Beyond the other reasons, the idea is that you can keep the hypervisor (host OS) minimal and stable and so it’s less likely to break. Whereas you can break the VM to your heart’s content and you’ll still have access to the host and don’t need to rely on live USB etc to recover (or even video access since you can still access the host over the network). You can also migrate the VM to a new system more simply since it’s just a VM with simpler, abstracted hardware that is less likely to run into driver issues.

        One downside of a VM is that GPU passthrough/GVT-g isn’t always stable or performant (especially GVT-g), so you might want to keep some services on the host if they need GPU access (such as transcoding). You can still access the VM’s files over NFS/Samba from the host, although I’d at least use containers (e.g. Docker) there, sticking with the principle of keeping the hypervisor OS as minimal as possible.
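
        A minimal sketch of sharing the VM’s files back out over NFS (export path, subnet and hostname are made up):

            # inside the VM: /etc/exports
            /srv/pool  192.168.1.0/24(ro,no_subtree_check)

            # on the host (or wherever the GPU lives): mount the export
            sudo mount -t nfs omv-vm:/srv/pool /mnt/pool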

  • @nezbyte
    link
    English
    7
    1 year ago

    OMV with SnapRaid+MergerFS in a Proxmox VM. I used an LSI card with PCI passthrough to the VM so it could see the drives. Nightly snapshots of the VMs are very convenient if you ever need to restore/migrate your install.
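
    For reference, passing an HBA through to a Proxmox guest is roughly this (the VM ID and PCI address are examples, and IOMMU has to be enabled first):

        lspci | grep -i lsi                  # find the HBA's PCI address
        qm set 100 -hostpci0 0000:01:00.0    # attach it to VM 100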

    • @reddit_suxOP
      link
      English
      4
      1 year ago

      That’s a lot to learn.

      • @TBi
        link
        English
        3
        1 year ago

        Things are easy or free, rarely both. :) It’s well worth learning OMV. If you have any questions I’ll try to help!

      • @[email protected]
        link
        fedilink
        English
        11 year ago

        At the moment I’m doing exactly that: learning Proxmox, OMV and InfluxDB, and tomorrow Grafana comes around to play 😉

        Nevertheless, Proxmox and OMV are the difficult ones if you’ve never used a hypervisor before. My toughest lesson was that software RAID is pretty slow; it took quite some time to realise that this was the problem.
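
        If you hit the same thing, a quick sanity check of whether the array itself is the bottleneck (the test path is a placeholder; fio needs to be installed):

            # sequential write test against the pooled storage, bypassing the page cache
            fio --name=seqwrite --filename=/srv/pool/fio.test --size=4G \
                --rw=write --bs=1M --direct=1 --numjobs=1 --group_reporting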

        But it is great to have a hypervisor to play around with and test different things in containers or VMs, and if you mess things up, you just spin up another in a few seconds and try again. It feels much less impactful than reinstalling everything on one machine.

        And you learn a lot about networking along the way if you aren’t already familiar with it.

  • Shurimal
    link
    fedilink
    4
    1 year ago

    Having faced the same situation, here’s my 2 cents:

    • OMV is the best solution for reusing/upcycling old consumer-grade PC hardware. Your storage pool is easily expandable using MergerFS, you don’t need 16+ GB of RAM, and you certainly don’t need server-grade hardware. But you won’t have the bells and whistles that ZFS offers (yes, there is a ZFS plugin, but at that point, why not just use TrueNAS?).
    • TrueNAS if you intend to build a “serious” storage server with many gigabytes of ECC RAM, multi-gigabit networking and all that jazz, and if you have the budget to buy 5 or 6 large HDDs at once to start your storage pool with a single RAIDZ1 or RAIDZ2 vdev (or buy 2 HDDs for a single mirrored vdev, with a whopping 50% of all your current and future storage going to redundancy; rough numbers below). As I understand it, RAIDZ expansion is in the works but not production-ready yet, which makes ZFS less suitable for an ad hoc grow-as-you-go storage solution.
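
    To put rough numbers on that redundancy trade-off (drive sizes are just examples):

        2 x 12 TB ZFS mirror:            12 TB usable  (50% goes to redundancy)
        6 x 8 TB RAIDZ2:                 ~32 TB usable (two drives' worth goes to parity)
        MergerFS + SnapRAID (1 parity):  usable = sum of the data disks; the parity disk
                                         only has to be as large as the biggest data disk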

    In the end, OMV won out for me; the 10 TB motley crew of various HDDs has served me well, and I can expand cheaply when my needs grow.

  • @MrGandalf
    link
    English
    4
    1 year ago

    I have tried a few of them, but I highly recommend trying Unraid. It will introduce you to the world of Docker containers.

    • @[email protected]
      link
      fedilink
      English
      3
      1 year ago

      I run Unraid myself as well. I don’t need raw performance, just stupid amounts of storage for my data hoard, and being able to slap in whatever disk has the best $/GB whenever I need to expand has been such a nice change.

      I do have an SSD pool to run my containers, and a separate one for drive caching as well.

      • @peregus
        link
        English
        1
        1 year ago

        Do you use SnapRAID?

        • @[email protected]
          link
          fedilink
          English
          1
          1 year ago

          Hadn’t heard of it. It seems similar to Unraid, but parity has to be manually synced, whereas Unraid syncs constantly?

          • @peregus
            link
            English
            1
            1 year ago

            Kinda, but it can accept JBOD, and you can add new disks of any size whenever you want without having to reconstruct everything (which I think is necessary with Unraid, or am I wrong?).
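
            For what it’s worth, the “manual” part is usually just a scheduled job, e.g. (the timings are arbitrary):

                # /etc/cron.d/snapraid: nightly parity sync, weekly partial scrub
                0 3 * * *   root  snapraid sync
                0 5 * * 0   root  snapraid scrub -p 10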

            • @[email protected]
              link
              fedilink
              English
              1
              1 year ago

              When adding/removing disks you do have to rebuild parity, but it’s not the end of the world and it handles it automatically. I run parity checks pretty often anyway (every other week) and it takes the same amount of time.

              I just like the Unraid UI and Docker implementation, tbh. I used to run everything off Portainer/Compose on my QNAP, which was great, but these days I am lazy and having an Update All button is 🤌

              • @peregus
                link
                English
                1
                1 year ago

                Can you add disks of different sizes? With SnapRAID, if I put a single disk in a different PC, the data is still readable; is it the same with Unraid? Thanks for the info!

                • @[email protected]
                  link
                  fedilink
                  English
                  2
                  1 year ago

                  Yep, exact same with Unraid. I have 3x 18 TB drives, one of which is parity, and a couple of old 4 TB drives I need to drop but am too lazy to 😅. You can add whatever size, but the parity drive has to be at least the same size as the largest drive in the pool.

    • @reddit_suxOP
      link
      English
      2
      1 year ago

      I was doing that, but I have a Windows PC on the network and have fucked up Samba too many times to be confident doing it again.

  • @[email protected]
    link
    fedilink
    English
    -1
    1 year ago

    I like OMV. It’s simple and to the point. TrueNAS is far too complicated and robust for basic home use, IMO; it’s like driving a tank to work. OMV does the job most people need. Nextcloud is cool but, again, a little too expansive for what I need; I’m not really going to use the included office tools or any of that. I just want remotely available storage. OMV + Tailscale + PiVPN means my backups and stored data are available anywhere, on any device, including my phone. Nextcloud streamlines that availability, but again, there’s just too much going on. TrueNAS is an enterprise product and feels like it. Not my cup of tea.
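
    A minimal sketch of that remote-access setup (hostnames, share name and mount point are placeholders):

        # on the OMV box: join the tailnet
        sudo tailscale up

        # on any other device on the tailnet: mount the SMB share
        sudo mount -t cifs //omv-box/backups /mnt/backups -o username=me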