I’m thinking about making some changes to my home server to make it a little more robust and let me do some cool new things with it (like actually trusting it to back up data with Nextcloud, replicating VMs or data across sites, etc.). I’m just looking for any advice people might have for this process of migrating hypervisors.

What I currently have:

  • Windows 10 Pro OS with Hyper-V
  • Running some applications on the host OS (Plex/PRTG/Sonarr/Radarr)
  • Running a few VMs for things I set up after I realized “I should be doing these in VMs…”
  • 4 HDDs for data, each just mounted individually: 2 for TV, 1 for movies, 1 for backups

What I’d like to have:

  • Better OS for running the hypervisor (Proxmox is what I’m reading may be best, but I’m open to suggestions)
  • Nothing running on the host OS other than a hypervisor
  • All my services running virtualized, whether via Docker in an LXC or a full guest OS
  • My drives all in RAID 5. Planning to add more drives at some point as well.

My thoughts on the process are that the “easiest” way may be:

  1. Just throw a new OS drive in to install Proxmox on
  2. Export my VMs from Hyper-V and import them into Proxmox
  3. Set up the services I had running on the host OS previously in their own VMs/containers
  4. Make a new RAID, either:
     a. with new disks, or
     b. by consolidating data from my existing disks to free up a few disks to start the RAID with, then moving data into the RAID and clearing out more disks to add to it, rinse and repeat until done (that’s a lot of data moving I’d like to avoid…)
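
Option (b)’s rinse-and-repeat build can be done incrementally with Linux software RAID, since mdadm can grow a RAID 5 one disk at a time. A rough sketch, with every device name a placeholder:

```shell
# Hypothetical device names throughout -- substitute your own.
# Start a 3-disk RAID 5 from the drives you've already emptied:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.xfs /dev/md0 && mount /dev/md0 /mnt/pool

# Copy data off the next old drive into the array, then fold that drive in:
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=4
# Repeat for each remaining drive. Every grow triggers a long reshape, and
# none of this replaces having a backup first.
```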

I wasn’t sure if it would be smarter to do something more like this instead (assuming it’s all possible; I’m not even sure that it is). If it works, it might reduce my downtime and let me tackle this in bits at a time instead of having an outage the entire time and feeling like I need to rush to get it all done:

  1. New OS drive for Proxmox
  2. Use Proxmox to boot my Windows 10 drive (this part I’m not sure about) so that everything continues as it’s currently set up.
  3. Slowly migrate my services out of the Windows 10-hosted VMs and host-installed services
  4. I probably still have to deal with the RAID the way I mentioned above
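
For step 2, booting the existing install should be possible by passing the whole physical Windows disk through to a VM rather than copying it. A hedged sketch, where VM ID 100 and the disk ID are placeholders:

```shell
# List stable disk identifiers on the Proxmox host:
ls /dev/disk/by-id/
# Attach the physical Windows drive to (hypothetical) VM 100 as its boot disk:
qm set 100 -sata0 /dev/disk/by-id/ata-YOUR_WINDOWS_DISK
# Windows may complain about the changed "hardware" (drivers, activation),
# so test this before relying on it.
```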

Is there any other method I’m just totally not thinking of? Any tips/tricks for migrating those Hyper-V VMs? That part seems straightforward enough, but looking for any gotchas.

The reason I haven’t done anything yet is that I only have so much time in the day, and I’m not trying to dedicate an entire weekend to this migration all at once. If I could split up the tasks it’d be easier to manage, though obviously some parts would be time-consuming.

Thanks in advance!

      • Bakkoda

        The wiki is all I used to move. I exported to VHDX, ran qemu-img convert to get qcow2, then qm importdisk into some premade VMs. As for VM settings, they were all Debian VMs from an old Hyper-V install.
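
        In command form, that workflow looks roughly like this (the file names, VM ID 100, and storage name local-lvm are all placeholders):

        ```shell
        # Convert the exported Hyper-V disk to qcow2 (names are placeholders):
        qemu-img convert -O qcow2 myvm.vhdx myvm.qcow2
        # Attach the converted disk to a premade Proxmox VM (ID 100) on
        # storage 'local-lvm':
        qm importdisk 100 myvm.qcow2 local-lvm
        ```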

      • krolden

        The Proxmox wiki and forums are very good resources at your disposal. I highly suggest reading as much as you can. Proxmox is great.

  • @[email protected]

    Don’t run Docker directly on your host. It likes to fuck with your iptables in ways that make VM networking break. Create a single VM to be your Docker host and then put all the containers on there (KVM, which is what Proxmox is under the hood, offers near-native CPU and memory performance anyway).
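
    The usual mechanism behind this, if you want to check for it: Docker sets the iptables FORWARD chain policy to DROP, which silently drops traffic for bridged VMs.

    ```shell
    # After Docker starts, the FORWARD policy is typically DROP:
    iptables -L FORWARD | head -n 1
    # One common workaround: allow bridged VM traffic via the DOCKER-USER
    # chain (vmbr0 is the default Proxmox bridge name -- adjust to yours):
    iptables -I DOCKER-USER -i vmbr0 -o vmbr0 -j ACCEPT
    ```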

    When building your VM environments, I’d suggest going with XFS for your file system. At my work we recently switched our containerised apps from EXT4 to XFS and got a 4-6x improvement in IOPS as a result.
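
    Inside a VM, switching a data volume to XFS is a few commands; a minimal sketch, assuming a hypothetical /dev/vdb data disk:

    ```shell
    # Format and mount a data disk as XFS (/dev/vdb is a placeholder):
    mkfs.xfs /dev/vdb
    mkdir -p /data
    mount /dev/vdb /data
    # Make it permanent with an /etc/fstab entry such as:
    #   /dev/vdb  /data  xfs  defaults  0 0
    ```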

    • krolden

      I would love to see Proxmox add Podman to the host OS and web UI in the future.

      • @[email protected]

        They are pretty opinionated about LXC and strongly opposed to direct Docker integration, so I wouldn’t expect it.

    • @[email protected]

      My experience is exactly the opposite: in 5 years I’ve never had any problem with iptables (although I do assign a manually created network to each Docker container).

      One of the biggest advantages of Docker is that containers can use memory and storage from the host without reserving a specific amount of either. If a container uses 200 MB now, it doesn’t tie up the 8 GB you would assign to a VM; and if tomorrow it needs 10 GB, it can use the extra unused RAM.
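
      For what it’s worth, that elasticity is the default, but it can be capped per container when you do want VM-style limits; the image and container names here are made up:

      ```shell
      # By default a container uses whatever free host RAM it needs. To
      # impose a VM-style ceiling instead (image/name are placeholders):
      docker run -d --name app --memory 8g --memory-swap 8g some/image
      # Without --memory, the same container simply grows and shrinks
      # with its actual usage.
      ```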

      • @[email protected]

        Docker will block networking to KVM VMs if they are configured with a network bridge, i.e. giving the VMs their own LAN IP addresses. I had so much trouble that I switched to Podman. Also, you’ve been working with containers for 5 years and call them “dockers”? Not the strongest ethos-based claim.

        • @[email protected]

          Never happened to me.

          I’m not saying it’s impossible, but it’s not a common issue that happens by default.

          • @[email protected]

            This whole post is in the context of virtual machines, and the claim you said your experience was exactly the opposite of was that Docker will mess up VM networking. If you don’t use a network bridge with KVM, that’s fair, but I’m not sure why you would disagree with advice for a problematic situation just because it doesn’t apply to you.

            • @[email protected]

              I use a network bridge with KVM and I have containers with no issues… It worked fine on first install, and I don’t think I’m the only one in the world.

  • @[email protected]

    Personally I tried Proxmox and ended up switching back to basic Debian with QEMU.

    Proxmox is certainly easier, but they make some weird decisions, like not supporting shared filesystems (albeit those are a bit ghetto, and they probably don’t want to deal with those issues on the enterprise front).

    You can even use Broadway in GTK 3 to get an accessible web interface for something like virt-manager.

    LXD has a built-in web UI as well.

    • @[email protected]

      Did the exact same; Proxmox has obnoxious kernel restrictions and I like to roll my own.

      Also, video acceleration is weak on Proxmox, but otherwise it’s solid.

      Scripted my LXC setup, and that was the last major win for Proxmox.

      • @[email protected]

        Thinking of whipping up my own Godot-based web UI/application for VM and LXD management; anyone hmu if you want updates or would be interested.

        • @[email protected]

          Sure, I don’t know Godot; I mostly do my work in scripting or Python, but I might be able to help.

          Nothing like a quick UI to make this stuff easier.

          • @[email protected]

            Exactly my thoughts. My ideal is to eventually hook up some type of node system for easy scripting of simple shit like starting/stopping VMs.

            I’ll send you a ping if I get anywhere interesting

    • @[email protected]OP

      How hard is replication across servers with just Debian and QEMU? I’m honestly not super great on Linux knowledge. I’m a Windows sysadmin by trade, with maybe 10-20% Linux. I run a few Ubuntu Server VMs at home and some RHEL at work. So I’m looking for something as easy to set up and as well-documented and supported as possible. Proxmox just seemed like the “industry standard” for self-hosting, but I was also looking at Unraid (which is supposedly better at storage and less good at virtualization) or even ESXi, but I didn’t want to get into the VMware payment bubble if I needed anything more than a simple host.
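
      For reference, on plain Debian the libvirt/QEMU answer to moving guests between hosts is usually virsh migration; a hedged sketch with made-up host and VM names:

      ```shell
      # Live-migrate a running libvirt guest to another host over SSH
      # ('myvm' and 'otherhost' are placeholders; --live assumes the disk
      # is on shared storage, otherwise add --copy-storage-all):
      virsh migrate --live --persistent myvm qemu+ssh://otherhost/system
      ```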

      • @[email protected]

        Ngl, Proxmox would be your best bet then, especially for reliability and being able to forget about most of the Linux background.

        QEMU is what Proxmox uses behind the scenes, but it puts an easy-to-use web UI in front of everything, minus a couple of less-used/less-stable features some power users like.

  • Anony Moose

    Noob question: what kinds of apps are you (or other self-hosters) running in VMs? With containerized apps, it seems like VMs would be largely unnecessary, unless you need some particular device to be virtualized? Or am I misunderstanding?

    • @[email protected]

      I can’t speak for everyone else, but I run about 6 different VMs solely to run different docker containers. They’re split out by use case, so super critical stuff on one VM, *arr stuff on another, etc. I did this so my tinkering didn’t take down Jellyfin and other services for my wife and kids.

      Beyond that, I also have two VMs for virtualized Pi-hole running Gravity Sync on different hosts, and another I intend to use for virtualized OPNsense.

      Everything is managed via Ansible, with each Docker project in its own Forgejo repo.

      • Anony Moose

        So in addition to docker containers, you use separate VMs for isolation? Or are you just referring to the docker containers themselves?

        I have separate docker containers for different apps too, but no virtualization beyond that.

        • @[email protected]

          Yeah I use different VMs to separate out the different containers into arbitrary groups I decided on.

          I run my docker containers inside different Debian VMs that are on a couple different Proxmox hosts.

          • Anony Moose

            Interesting. Seems like it might be worth it for me to look into Ansible, and perhaps even Proxmox, on the new SFF PC I’m building as a media center, for stability/reliability.

            Thanks!

  • @[email protected]

    I would start by moving the services running on the host to a VM; that means less downtime for those when switching to Proxmox.

    Also, if possible, address the data issue before migrating. If you can add more disks, you could set up a new ZFS pool, ready to be used by Proxmox.
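
    If you go the ZFS route, a raidz1 pool (ZFS’s rough RAID 5 analogue) from three new disks might look like this; the device and pool names are placeholders. One caveat for the rinse-and-repeat plan: a raidz vdev traditionally can’t be grown one disk at a time the way mdadm can.

    ```shell
    # Create a raidz1 pool from three new disks (names are placeholders;
    # /dev/disk/by-id/ paths are preferred over sdX in practice):
    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd
    zfs create tank/media     # a dataset for the media libraries
    zpool status tank         # verify pool health
    ```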

    And don’t forget to backup (to external storage), you never know what could go wrong.

    • @[email protected]OP

      Thanks, I wasn’t sure whether the data should be taken care of first or last; first makes a lot more sense though. And prepping as many services into VMs ahead of time definitely sounds like the best way to reduce downtime, even if I end up moving them into a different container again later on.