I’m running three servers: one for home automation/NVR, one for NAS/media services, and one for network/firewall services.

Does this breakdown look doable based on the hardware? Should the services be distributed differently for better efficiency?

Servers 1 and 3 are already up and running. I just received my NAS and am trying to decide where to run each service to best take advantage of my hardware.

I’m also considering UnRaid instead of Proxmox for a NAS OS. I just chose Proxmox because I’m familiar with it, and I like the ability to snapshot. I also intend to run Proxmox Backup Server offsite at some point, and I like the PVE/PBS integration.

Any advice would be much appreciated!

  • @[email protected]
    link
    fedilink
    English
    19
    edit-2
    5 days ago

    Just remember the KISS principle: Keep It Simple, Stupid

    Keep the NAS as a NAS, and I would honestly trim down everything else into a clustered hypervisor setup (like Proxmox) with dedicated VMs to run each stack. That way if you need to take a machine down for whatever reason, you can migrate its VMs/containers to another machine, with minimal downtime, so you can do whatever it is you need to do with said machine.

    Full disclosure: this is what I do. I was in your shoes before.
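
    For example, with a clustered Proxmox setup, draining a node for maintenance is a couple of one-liners (the cluster name, guest IDs, and addresses below are just placeholders):

        pvecm create homelab             # on the first node, once
        pvecm add 192.168.1.20           # on each new node, pointing at the first
        qm migrate 100 node2 --online    # live-migrate a VM off the node
        pct migrate 110 node2 --restart  # LXC containers migrate with a quick restart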

    • Possibly linux
      4 days ago

      I wouldn’t do that unless you have lots of money to blow on crazy hardware. Running separate virtual machines is very inefficient. Instead, run a few virtual machines with a few services in each. I would separate it out into classes based on the load and use case.

      • @AustralianSimon
        4 days ago

        Or just run them in containers and skip the VMs entirely. You can do snapshots fine with Debian.

        • Possibly linux
          4 days ago

          I don’t like LXC personally. It seems that most of the community disagrees but for me it has been nothing but pain.

          • @AustralianSimon
            4 days ago

            Might be the population on lemmy, but elsewhere docker or podman are way more common. K8s in enterprise.

            • Possibly linux
              4 days ago

              Proxmox doesn’t have native support for any of those. Honestly I think it would be cool for Proxmox to switch to some form of Podman.

              • @AustralianSimon
                4 days ago

                That would be a smart move on their behalf I think.

      • @[email protected]
        link
        fedilink
        English
        14 days ago

        Instead, run a few virtual machines with a few services in each.

        That’s what I meant, I guess it wasn’t very clear. When I say “stack”, I mean multiple services.

  • Possibly linux
    4 days ago

    I personally would avoid LXC. That seems to be a hot take but in my experience it is better to run docker/podman in a few VMs.

    • @bostondrivingisworseOP
      4 days ago

      …really? I run most of my services in an LXC, and have for a while without issue.

      • Possibly linux
        4 days ago

        Maybe I’m doing it wrong then. I run LXC, but it has always been a much worse experience: boot times are terrible, and the controls that work for VMs don’t work as well for LXC. You also can’t live-migrate, which is a problem for me.

        • @ikidd
          3 days ago

          I think you’re doing it wrong. LXCs boot almost instantaneously on a hypervisor since they share the host kernel; I’d be surprised if my CTs take 5 seconds.

          I would agree on the live migration issue, but I guess you pick your services accordingly. I have a VM that runs docker and an LXC docker host, and I pick my containers for each accordingly.

          • Possibly linux
            3 days ago

            How on earth are you getting 5-second boot times with LXC? My containers take around 10 minutes to boot, while VMs take a few seconds. Also, LXC networking seems to break randomly.

            Edit: I went back and figured it out. IPv6 was set to DHCP in Proxmox, which caused everything to halt until the timeout. I set it to static in Proxmox and now it boots instantly.
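
            For reference, the NIC settings live on the container’s config, so the fix looks something like this (the CT ID and addresses are made up):

                pct config 101 | grep net0   # inspect the CT's current NIC config
                # pin a static IPv6 so boot no longer waits on a DHCPv6 timeout:
                pct set 101 -net0 name=eth0,bridge=vmbr0,ip=dhcp,ip6=fd00:10::101/64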

            • @ikidd
              3 days ago

              I have no idea what you have going on, I’ve never seen LXCs take that long, even if I include the time it takes to down the containers and bring them up after a reboot.

              What are you using for running them? I just tested my docker LXC and it took 16 seconds from when I typed “reboot” to having a login prompt. And that’s on an ancient R410 server running proxmox.

                • @ikidd
                  2 days ago

                  Yes, RAID 10 ZFS with no ARC, 6GB SAS drives.

    • @[email protected]
      link
      fedilink
      English
      14 days ago

      Not everything plays nice in Docker, and there are plenty of those services that also don’t need a full VM to operate. LXC is great for those edge cases. Otherwise I agree, a few VMs for various Docker stacks is the way to go.

      • @AustralianSimon
        4 days ago

        All the services OP has listed run great in docker, excluding Frigate (not tested personally).

        • @[email protected]
          link
          fedilink
          English
          14 days ago

          True, I noticed that as well. Still, it’s worth moving bare-metal docker installations to VMs. Easier to manage IMO.

  • @[email protected]
    link
    fedilink
    English
    54 days ago

    I’m also considering UnRaid instead of Proxmox for a NAS OS.

    NAS just has no meaning anymore?

    • @[email protected]
      link
      fedilink
      English
      13 days ago

      Storage appliance/service offering storage over the network to other clients.
      Dunno what else you’d want or define under the ‘NAS’ term.

      • @[email protected]
        link
        fedilink
        English
        13 days ago

        Let’s be honest - in this community NAS means “my do-everything server that may have a RAID and probably also shares storage”.

  • @AustralianSimon
    4 days ago

    Personally I would keep it simple: run a separate NAS and run all your services in containers across the devices best suited to them. The i3 is not going to manage Jellyfin while sharing those other services; I tried running it on an N100 and had to move it to a beefier machine (i5). Immich, for example, will use a lot of resources when performing operations, just a warning.

    If you mount NAS storage for hosting the container data, you can move the containers between machines with minimal issues. Just make sure you run services from a docker-compose file and keep those files on the NAS; there’s a rough sketch below.

    You completely negate the need for VMs and their overhead, and you can still snapshot the machine: if you run debian as the OS there is timeshift, and other distros have similar tools.
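
    A rough sketch of what I mean, with made-up IPs and paths; the compose file lives on the NAS, and the data volume points back at the NAS over NFS, so any box can pick the service up:

        # write the compose file onto the NAS share, then run it anywhere
        cat > /mnt/nas/compose/myapp/docker-compose.yml <<'EOF'
        services:
          app:
            image: nginx            # stand-in for whatever service you run
            volumes:
              - appdata:/data
        volumes:
          appdata:
            driver: local
            driver_opts:
              type: nfs
              o: addr=192.168.1.10,nfsvers=4
              device: ":/export/appdata"
        EOF
        docker compose -f /mnt/nas/compose/myapp/docker-compose.yml up -d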

    • @[email protected]
      link
      fedilink
      English
      23 days ago

      quicksync should let the i3 handle jellyfin just fine if you’re not going beyond 1080p for a couple of concurrent users. Especially if you configure the Nice values to prefer jellyfin over immich.
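
      If jellyfin and immich run as systemd services on the same host, the nice suggestion can be as simple as this (the service names are assumptions about the setup):

          sudo systemctl edit jellyfin    # add:  [Service]  Nice=-5
          sudo systemctl edit immich      # add:  [Service]  Nice=10
          # or adjust already-running processes on the fly:
          sudo renice -n -5 -p "$(pidof jellyfin)"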

      I’m not aware of what platform the N300 is on, but it might be worth the initial setup if there’s some room to upgrade the CPU later should it cause trouble.

      If OP is going for multiple systems, I’d definitely agree on making one of them a pure NAS and let a more upgradable system run the chunky stuff.

      • @AustralianSimon
        3 days ago

        quicksync should let the i3 handle jellyfin just fine if you’re not going beyond 1080p for a couple of concurrent users. Especially if you configure the Nice values to prefer jellyfin over immich.

        Most of my content is 4K h264. You may be right about 1080p, but I generally don’t have content at that resolution.

        Worst case scenario he can always keep the N300 for other stuff if it doesn’t work out.

    • @[email protected]
      link
      fedilink
      English
      13 days ago

      Had issues with docker compose mounting NFS storage.
      It seemed like the share got disconnected while in use by the container.
      I did eventually resolve it by manually mounting it on the host.

      Any idea why?
      Host OS: Debian 12
      NFS server: Debian 12 in a Proxmox VM
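
      In case it helps anyone else: a hard NFS mount on the host plus a bind mount into the container is the workaround that stuck for me (the IPs, paths, and image are examples):

          # /etc/fstab on the docker host:
          # 192.168.1.10:/export/appdata  /mnt/appdata  nfs4  hard,_netdev  0 0
          sudo mount /mnt/appdata
          # bind-mount the host path instead of using an NFS volume driver:
          docker run -d --name myapp -v /mnt/appdata/myapp:/data nginx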

    • @ikidd
      3 days ago

      The advantages you gain from running a hypervisor on something like ZFS are immeasurable: snapshotting, replication, snapshot backups, and high availability. You don’t have to quiesce machines to back them up, and you can take instant COW snapshots before upgrades.

      KVM doesn’t really have overhead, that’s the kernel part. Maybe a bit of RAM, but with LXCs it’s negligible.
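
      For instance, a pre-upgrade snapshot and rollback is effectively instant (the dataset names are placeholders):

          zfs snapshot rpool/data/vm-100-disk-0@pre-upgrade
          # roll back if the upgrade goes sideways:
          zfs rollback rpool/data/vm-100-disk-0@pre-upgrade
          # or replicate the snapshot to another box:
          zfs send rpool/data/vm-100-disk-0@pre-upgrade | ssh backup-host zfs recv tank/vm-100-disk-0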

      • @AustralianSimon
        3 days ago

        I didn’t think OP was going the ZFS route, so it wouldn’t matter on that point.

        His Server 2 will be running on the red line imho, so any overhead would have an impact.

  • @[email protected]
    link
    fedilink
    English
    5
    edit-2
    5 days ago

    Does your storage include any kind of RAID? If not, that’s something I’d personally add to the mix to avoid service interruptions. Also, 32 gigs of RAM is not much, so don’t use ZFS on proxmox: it eats up your memory, and if you run out, everything is stupidly slow (personal experience speaking here, my proxmox server has 32 gigs as well).
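
    If you do run ZFS anyway, you can at least cap the ARC so it doesn’t grab its default of roughly half your RAM (the 8 GiB value below is just an example):

        # persistent cap, Debian/proxmox style (8 GiB = 8589934592 bytes):
        echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf
        sudo update-initramfs -u          # then reboot
        # or apply immediately without a reboot:
        echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max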

    Also, that’s quite a lot of stuff to maintain, but you do you. Personally I wouldn’t want that big a stack to maintain for my everyday needs, but I have a wife, kids, kids’ hobbies and a ton of other stuff going on, so I barely have enough personal capacity to run my own proxmox, pihole, immich and HomeAssistant, and none of those are in perfect condition. Especially the HA setup, which badly needs some TLC.

    And then there’s the obvious. A personal mail server on a home-grade uplink is a beast of its own to manage, and if you really don’t know what you’re getting into, I’d recommend against it. And I’m advocating for every mail server which is not owned by alphabet/microsoft/apple/etc. It’s just a complicated thing to do right, and email is quite an essential thing for everyday life today, so be aware. If you know what’s coming up (or are willing to eat up the mistakes and learn from them) then by all means, go for it. If not, then I’d suggest paying someone to make it happen.

    And then the backups. I’ve made the mistake a few times of thinking it’d be fine to set up backups at some point in the future, and that has bitten me in the rear. You either have backups in the pipeline coming Very Soon™ or you lose your data. And even if they’re coming Very Soon, you still risk losing your data.

    Plus, with backups, if you don’t test recovery from them, you don’t have backups. Although for a home gamer it’s often a bit much to ask for a blank-slate recovery test, so I’ve at least settled on a scenario where I know for sure I can recover from any disaster in the home lab without testing it, as I don’t have enough spare hardware to run that test fully.

    Beyond that, just have fun. Recently I ran into an issue where my proxmox server needed some hardware maintenance/changes, which took my pihole server down, so the whole LAN was out of DNS. Not the end of the world for me, but a problem anyway, and while I’ve been planning a remedy for that, I haven’t yet done anything concrete about it.

    • @bostondrivingisworseOP
      5 days ago

      Thanks for the advice! I have local backups, but as I look to use more self-hosted services, I want an offsite backup in case, say, my house burns down. That way I don’t lose all my photos, etc.

      I’ve decided to forgo a mail server; you’re not the first to tell me it’s very difficult to set up and maintain properly, ha!

  • @[email protected]
    link
    fedilink
    English
    25 days ago

    I’m interested in why you chose the i5 for the automation, rather than the video server?

    I’m no expert, but things like transcoding (or even just re-encoding) take a lot of grunt, which it seems the i5 would be good for.

    The i3 would be good for more constant, lower power tasks like automation.

    At least, that’s my thoughts, happy to be shown your reasoning…

    • @AustralianSimon
      4 days ago

      I ran Jellyfin on a N100 for a while and it just couldn’t cope despite being by itself on the machine. I mostly watch 4K h264 encoded stuff. I ended up putting it on a machine with an i5.

    • @bostondrivingisworseOP
      4 days ago

      I had the i5 prior to getting a NAS, and use it for Frigate. The i3 is just what came with the NAS box.

  • hendrik
    5 days ago

    I think a lot comes down to usage. It just depends whether you connect 1 camera to Frigate or 6, whether you enable some AI features, whether you download a lot of TV series or just a few and delete old stuff, or whether you use ZFS or other demanding things. I personally like to keep the number of servers low, so I probably wouldn’t buy server 2 and would try to run those services on server 1 as well. I’m not sure. You did a good job separating the stuff, and I think you got some good advice already. I’d add more hard disks; 6TB wouldn’t do it for me. And some space for backups. But you can always keep an eye on actual resource usage and just buy RAM and hard disks as needed, as long as your servers have some slots left for future upgrades. That said, I think you already have way more servers and RAM than you need; I’d probably run half of those services on a smaller server.

    • @bostondrivingisworseOP
      5 days ago

      Thanks so much for the response! Server 1 has been running strong for quite a while with six cameras on Frigate and very little CPU usage. I do have a ZFS pool on Server 1– this is the first I’m hearing that it requires more resource overhead… Could you elaborate a bit?

      6TB is just to start, and I fully intend to upgrade both RAM and storage as I need it.

      • @WhatAmLemmy
        5 days ago

        ZFS is enterprise-grade software RAID, and 1GB of RAM per TB of storage is the minimum recommended for a production server (e.g. enterprise implementations).

        I’ve seen many users stating they have far, far less than that without issues. I recall an r/DataHoarder user saying they have 100+ TB and only 16 or 32GB of RAM, which is not fully utilized, so it all depends on your usage profile and the size/scale of r/w ops occurring during peak periods.

  • @[email protected]
    link
    fedilink
    English
    15 days ago

    I would put truenas on the NAS, and also run a VM on truenas with 16-24G of RAM.

    Create a kubernetes or docker swarm cluster with server 1 and the NAS VM, and just have everything as containers. That way you have one resource pool, and the containers will be started wherever there are enough resources available. The containers will mount NFS shares from truenas, which truenas will create automatically as ZFS datasets. ZFS supports snapshots.
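
    A minimal sketch of the swarm variant, assuming placeholder addresses (192.168.1.20 for server 1, 192.168.1.10 for truenas) and jellyfin as the example service:

        docker swarm init --advertise-addr 192.168.1.20              # on server 1; prints a join token
        docker swarm join --token <printed-token> 192.168.1.20:2377  # inside the truenas VM
        # a service whose data lives on an NFS share exported by truenas:
        docker service create --name jellyfin \
          --mount type=volume,source=media,destination=/media,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.10,volume-opt=device=:/mnt/tank/media \
          jellyfin/jellyfin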