Hey all,

Just wondering what the consensus is on hosting directly on a computer versus virtualization? Right now I’m hosting my Lemmy instance on a Hetzner VPS, but I would like to eventually migrate to my Linux box at home. It currently runs as a media PC (Ubuntu) in the living room, but I always intended to self-host other software on the side, since it should be more than capable of doing both (Ryzen 5600G, 16 GB DDR4).

I’m just torn though - should I host a virtual machine on it for Lemmy, or run it directly on Ubuntu as-is? I plan to do some further self-hosting projects as well later down the line.

  • @sv1sjp
    link
    English
    8
    1 year ago

    Personally, I would prefer docker containers as I can move them to a new server or even create backups very easily.
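    For instance, with state kept in named volumes, “moving” a service to a new server is mostly a matter of copying a compose file plus the volume data. A minimal sketch (the image name and volume layout here are assumptions, not Lemmy’s actual compose file):

    ```yaml
    # docker-compose.yml (illustrative only)
    services:
      lemmy:
        image: dessalines/lemmy        # assumed image name
        restart: unless-stopped
        volumes:
          - lemmy-data:/data           # state lives in a named volume
    volumes:
      lemmy-data:
    ```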

    • @[email protected]
      link
      fedilink
      English
      5
      1 year ago

      Backups are easily done with virtual machines as well. Taking, moving and restoring such backups is in fact much easier than moving docker containers between hosts, as you don’t have to differentiate between volumes and locally mounted directories, for example. That being said, depending on the use case, containers can be a nice and lightweight solution for separating applications at the userspace level.

      • @sv1sjp
        link
        English
        -1
        1 year ago

        Yes they are, but still, with a VM you have to back up an entire OS on top of the applications/services you actually want!

    • @callcc
      link
      English
      2
      1 year ago

      Docker is convenient, but not really a solution if you care about free software. It’s about as opaque as a Windows installer. Not literally, of course, but it drags the FLOSS community in that direction. I think we should work on making distributions better at installing and maintaining complex web applications (including their databases etc.). Why can’t I just install Lemmy, or whatever, as a standard Debian package? Why is state scattered around? Why is it tough to have several versions of interpreters or libraries installed side by side? These are questions we should ask the people making distributions. Maybe Guix- or NixOS-style distros are the answer to these questions.

      All in all, Docker isn’t really about isolation or containers; it’s about having a convenient but opaque abstraction for a package and its state.

  • @0110010001100010
    link
    English
    8
    1 year ago

    Run everything in docker containers. Much easier to manage than virtual machines and lighter on resources.

  • @[email protected]
    link
    fedilink
    English
    6
    1 year ago

    I feel like putting it in a VM is probably overkill. I just have everything running in Docker containers and it’s pretty good like that.

  • z3bra
    link
    fedilink
    English
    5
    1 year ago

    Using your home server as a VM host will put more load on it. Creating one VM per piece of software (e.g. Lemmy, a web server, an IRC bouncer, etc.) is a good idea in terms of “replaceability”, because you can easily upgrade the OS, restore a service to an earlier point in time, or redistribute resources. The main downside is that you add a big overhead to each service, as a VM requires a full OS to boot. So if you run 5 services in 5 VMs, you end up with 6 full OS instances consuming RAM, disk and CPU, even at idle.

    If you can afford that, then do it! It makes for a better separation of concerns and is definitely easier to maintain in the long run.

    Otherwise, you have two other options. The first is containers. It’s just like the previous method, except that it doesn’t boot a full OS; it just starts your service using the already-running kernel. Many self-hosters use this method, so it’s well documented, and it’s very likely you’ll find a container “package” for any service you might want to run. Note however that it adds another layer of complexity in terms of resource sharing (especially the networking part) that you’ll have to understand in order to secure things properly.
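    To make that networking layer concrete, one common pattern is to give each service its own bridge network and publish only the one port you intend to expose (a sketch; the image name and port are assumptions):

    ```shell
    # Create an isolated bridge network and attach only this service to it.
    docker network create lemmy-net
    # Publish the service port on loopback only; a reverse proxy can front it.
    docker run -d --name lemmy --network lemmy-net \
        -p 127.0.0.1:8536:8536 dessalines/lemmy
    ```

    Anything else attached to lemmy-net stays unreachable from outside the host unless you explicitly publish it.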

    The last method is hosting your services directly on bare metal. This is the simplest method IMO, but it gets tricky when the software you need isn’t packaged for your distribution, as you’ll have to handle the service script and upgrades manually. It is, however, the lightest method in terms of resource usage, as there is no overhead at all to running a service. And depending on the OS you use (Ubuntu, in your case), you still get container-like resource control thanks to systemd and cgroups (other resource-control mechanisms exist, but this one’s fine).
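    As an illustration of that systemd/cgroups resource control, a unit file can cap a bare-metal service without any container runtime (the path, user and limits below are made-up examples):

    ```ini
    # /etc/systemd/system/lemmy.service (sketch only)
    [Unit]
    Description=Lemmy server
    After=network-online.target

    [Service]
    # Assumed install path for the binary
    ExecStart=/usr/local/bin/lemmy_server
    User=lemmy
    # cgroup v2 limits: hard memory cap, and at most 1.5 cores
    MemoryMax=2G
    CPUQuota=150%
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    ```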

    My personal preference goes to the last one (running on bare metal), but that’s also because I run OpenBSD as my main OS, which is definitely well suited to this method. I also do not like the Docker ecosystem, as I think it trades too much security for usability (but that’s a very personal take).

    In any case, happy self-hosting, my friend!

  • @Cobrachicken
    link
    English
    4
    1 year ago

    Imho any form of containerization/virtualization is better than running server duties directly on a dedicated machine, if only for (in my case) easy backup and rollback in case of human error.

    Going with Proxmox VMs atm, and slowly wrapping my head around LXC on a separate “playground” Proxmox machine.

  • @beigegull
    link
    English
    4
    1 year ago

    User accounts are a reasonable isolation mechanism for reasonably trustworthy server software.

  • @PriorProject
    link
    English
    4
    1 year ago

    I run everything in docker compose and the two wins I feel like I get out of doing so are:

    • State in well-identified volumes. I’ve run multiple services on shared bare metal, and multiple services in isolated VMs. In both cases, I was equally terrified of upgrades and migrations because I frequently had no idea where in the filesystem critical data was stored. I would often have some main data dir I knew about, but little confidence that other important data wasn’t stored elsewhere. With docker volumes, if I can restart a container without losing data I can migrate to a new host without losing data just by copying my volume dirs. And I restart containers frequently so I always have high confidence in my ability to backup, upgrade, and migrate.
    • Library and os-userspace isolation. This is the thing docker is known for, I never worry that upgrading a lib for app-a is going to break app-b. This is really a secondary benefit for me, though. I rarely had this problem even on shared metal with many apps, and never experienced it in isolated VMs. For me it’s a secondary benefit to the nice volume management tools.

    Docker is in no way unique in its ability to provide state management, but ephemeral containers enforce good state management, and the Docker tools for doing it provide a very nice experience, so it’s my preference.
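    The volume-based migration described above can be sketched with the usual tar-in-a-throwaway-container pattern (the volume and file names here are placeholders):

    ```shell
    # On the old host: archive the named volume to a tarball in $PWD.
    docker run --rm -v myvol:/data -v "$PWD":/backup alpine \
        tar czf /backup/myvol.tar.gz -C /data .

    # On the new host: recreate the volume and unpack the archive into it.
    docker volume create myvol
    docker run --rm -v myvol:/data -v "$PWD":/backup alpine \
        tar xzf /backup/myvol.tar.gz -C /data
    ```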

  • @callcc
    link
    English
    4
    1 year ago

    I’m not sure there is a consensus here; it’s a trade-off along many dimensions: security, simplicity, performance, independence, ease of management, etc.

    Bare metal gives you the best performance and good simplicity, since there are the fewest moving parts and the least amount of code deployed. It is fairly easy to manage if you know your distro and service well. You might run into trouble if you want to run non-packaged software and compile things on your own.

    Containers are pretty efficient and simple to use because a lot of software comes pre-packaged (usually including a basic OS userspace, without a kernel). They are meant to have only one service per container, which can lead to quite some complexity in inter-connecting them. Security-wise, they are less well isolated than VMs, both from each other and from the host. Operating on their persisted state can be very easy and lightweight. One drawback is that people often don’t understand what’s happening inside (cf. reproducibility). The saying goes: as with smoothies, you immediately know whether you like it (a container), but you don’t know what’s inside.

    VMs are slightly less efficient and have a larger management overhead, i.e. you need to configure and upgrade each OS (users, firewall rules, etc.) on its own; on the other hand, you gain flexibility by being able to install different distros or keep different versions around. You can also more easily manage the state of the machines (backups, snapshots, etc.). Security-wise, VMs are pretty well isolated from each other and from the hypervisor; often the configuration of the machines themselves is the weak point (firewall, sshd or other services).

  • @[email protected]
    link
    fedilink
    English
    3
    edit-2
    1 year ago

    You want to isolate the things you host from one another (security, making updates easier, etc.). So if you host just one thing, you can do so on the host directly. If you host multiple services, you may want some separation method.

    VMs are one method, but they waste a lot of resources, especially RAM. A more elegant way is containers. Both the docker/podman route and the LXC way are quite popular.

  • Qazwsxedcrfv000
    link
    fedilink
    English
    2
    1 year ago

    It does not hurt to create a VM or LXC container for it. The overhead is not prohibitive. If things go astray, you can just kill the VM or container and restart from scratch. Setup and management can be an issue, though, as you have to install at least some VM-management software on your media PC.

  • acqrs
    link
    fedilink
    English
    2
    1 year ago

    I honestly don’t think it matters much. I run my instance on a 2-CPU VM on Hetzner, and I’ve noticed no negative impact: it’s plenty snappy, has plenty of database space and all that. It’s pretty lightweight for a one-person instance.

    If you’re going for a bigger instance, it still doesn’t matter much as long as your instance is capable of handling the load.

  • @BrianTheeBiscuiteer
    link
    English
    2
    1 year ago

    In any case, I wouldn’t host on a machine I use daily (i.e. a personal computer). The worst case is an attacker wiping your machine, or encrypting everything and holding your data hostage. If that happened to one of my servers, I’d just shrug and reimage.

    If performance is paramount, or you have a low-resource device like a Raspberry Pi, then containers are fine. If you have something with decent processing power, I say go the extra mile and set up a simple VM using QEMU. A few distros offer cloud images, so you don’t have to install from an ISO.
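    For example, booting an Ubuntu cloud image under QEMU/KVM usually just needs a small cloud-init seed image alongside the disk (a sketch; file names and sizes are assumptions, and cloud-localds comes from the cloud-image-utils package):

    ```shell
    # Fetch a current Ubuntu 22.04 cloud image.
    wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
    # Minimal #cloud-config: authorize your SSH key for the default user.
    printf '#cloud-config\nssh_authorized_keys:\n  - ssh-ed25519 AAAA...\n' > user-data
    cloud-localds seed.img user-data
    # Boot with KVM, forwarding host port 2222 to the guest's SSH port.
    qemu-system-x86_64 -enable-kvm -m 2G -smp 2 \
        -drive file=jammy-server-cloudimg-amd64.img,if=virtio \
        -drive file=seed.img,if=virtio \
        -nic user,hostfwd=tcp::2222-:22 -nographic
    ```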

  • Oliver Lowe
    link
    fedilink
    English
    2
    1 year ago

    One thing I think about is isolation. Do you want/need to strongly isolate the software and its data from the host operating system?

    • Anarch157a
      link
      fedilink
      1
      1 year ago

      @otl

      @rozno

      Isolation makes administration easier. In case of problems, you don’t need to worry about how one application affects everything else. VMs and containers also give you the ability to limit how many resources (CPU, memory, IO, etc.) each app uses.

      Back in the day, this was the reason why companies had different physical servers for each application, even if a single server could, in theory, run all of them. Now we can do it with isolation techniques.

  • @[email protected]
    link
    fedilink
    English
    2
    1 year ago

    Until you’ve given it a go, it’s hard to appreciate just how much a containerised solution simplifies things. There is a bit of a learning curve to get your first few deployments done, but once you have it all set up it’s like magic: you can quickly test out other software and easily tear it down if you don’t like it, and you can update most software on your host without worrying about breaking compatibility or stopping any users from using your services. Then when you chain in an auto-update system like Watchtower, it’s even more magical!