Hello everyone! During one of those illuminated evenings, I got the idea to move my small server at Scaleway to a more powerful server at Hetzner. If I make the move, I am thinking of splitting the server into various VMs, to host different services that belong to different trust boundaries, for example:
- A Lemmy/writefreely instance
- Vaultwarden/Gitea
- Wireguard tunnel to my home infrastructure
- Blogs and other convenience services
In order to achieve the best level of separation, I was thinking of using VMs. My default choice would be Proxmox, because I have used it in the past and because I generally trust it. However, I am trying to evaluate multiple options, and maybe someone has good or better experiences to share.
Other options I thought about are:
- Run everything in Docker. I am going to do this regardless, but Docker escapes are always possible, especially with public-facing images that I did not write myself and/or that require a host volume.
- KVM directly? I am OK even without a GUI, to be honest. I don't know whether there is an Ansible module or, even better, a Terraform provider for this; that would be great. (EDIT: I found https://registry.terraform.io/providers/dmacvicar/libvirt/0.7.1 which seems awesome! See the sketch below this list.)
- ESXi? I have no experience with this solution.
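To give an idea of what that provider looks like, here is a minimal, untested sketch of a single VM definition (the names and the image URL are placeholders I made up):

```hcl
terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.7.1"
    }
  }
}

# Talk to the local system libvirtd.
provider "libvirt" {
  uri = "qemu:///system"
}

# Base disk created from a cloud image (placeholder URL).
resource "libvirt_volume" "base" {
  name   = "debian-base.qcow2"
  source = "https://cloud.debian.org/images/cloud/..." # pick an image
}

# The VM itself.
resource "libvirt_domain" "testvm" {
  name   = "test-vm"
  memory = 2048 # MiB
  vcpu   = 2

  disk {
    volume_id = libvirt_volume.base.id
  }

  network_interface {
    network_name = "default"
  }
}
```

One `terraform apply` later, the domain shows up in `virsh list`.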
Any idea or recommendation?
I'd go with Proxmox with a Docker VM; then you can always run other VMs or LXC containers if needed.
My server is running on Proxmox, so it gets my vote as well!
Jumping on the Proxmox bandwagon: I run it too, and it's great. Aside from the occasional nag to get a premium licence, it's completely free and open source.
Yeah, probably this is the way I will go, to be honest. I just wanted to bounce some ideas around in case I was missing out on some other technology, and a few people mentioned some stacks in this thread which are pretty obscure to me, so it's nice to look into them and compare!
I second this. This is how I do it!
This is how I run my whole home setup. Pretty much everything is virtualized through Proxmox with Debian VMs or LXCs. Also, Proxmox Backup Server is incredibly easy to set up and gives you great peace of mind.
I use libvirt and never found a reason to switch to something else. Easy to script, easy to manage with the GUI.
Do you use just plain bash to script it? I saw that there is a Terraform provider, and that actually looks interesting to me: basically similar functionality to Proxmox, but less software.
Not parent commenter, but I use ansible + plain bash scripts/virsh/XML definitions to manage my libvirt instances/“cluster”, it just works.
I have been running Proxmox on the side/at work. I like it as well, but never took the time to dive into the API/automation side of things. libvirt is simpler but still powerful.
Oh right, there is the XML aspect that I didn’t consider.
I have to say that I very much prefer the declarative Terraform strategy over Ansible, and I saw that the libvirt Terraform provider is quite mature. I have seen that there are even some providers for Proxmox (but less mature, in my opinion), so it seems that either way the machine definitions could be codified and automated. But the thing is, if the machines are all in Terraform code, there is basically not much use for Proxmox (metrics are going to be in node exporter; maybe just backups and snapshots?).
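For reference, a rough, untested sketch of what the same idea looks like with the community Telmate/proxmox provider (field names are from memory and vary between provider versions; the node and template names are made up):

```hcl
terraform {
  required_providers {
    proxmox = {
      source = "Telmate/proxmox"
    }
  }
}

provider "proxmox" {
  pm_api_url          = "https://pve.example.org:8006/api2/json"
  pm_api_token_id     = "terraform@pve!provider"               # placeholder
  pm_api_token_secret = "00000000-0000-0000-0000-000000000000" # placeholder
}

# Clone a VM from a template on a given node (both names made up).
resource "proxmox_vm_qemu" "vaultwarden" {
  name        = "vaultwarden"
  target_node = "pve1"
  clone       = "debian-template"
  cores       = 2
  memory      = 2048

  network {
    model  = "virtio"
    bridge = "vmbr0"
  }
}
```

So either way the machine definitions can live in code, and the remaining value of Proxmox would indeed be mostly backups, snapshots, and a web UI for ad-hoc poking.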
Ansible can be declarative if you do it right and take the time to write a few roles to manage your use case. For example, my Ansible libvirt config looks like this:
```yaml
libvirt_vms:
  - name: front.example.org
    xml_file: '{{ playbook_dir }}/data/libvirt/front.example.org.xml'
    autostart: no
  - name: home.example.org
    xml_file: "{{ playbook_dir }}/data/libvirt/home.example.org.xml"
    state: running

libvirt_port_forwards:
  - vm_name: front.example.org
    vm_ip: 10.10.10.225
    vm_bridge: virbr1
    dnat:
      - host_interface: eth0
        host_port: 22225 # SSH
        vm_port: 22
      - host_interface: eth0
        host_port: 19225 # netdata
        vm_port: 19999

libvirt_networks:
  - name: home
    mac_address: "52:52:10:ae:0c:cd"
    forward_dev: "eth0"
    bridge_name: "virbr1"
    ip_address: "10.10.10.1"
    netmask: "255.255.255.0"
    autostart: yes
    state: active
```
This is the only config I ever touch, since the role handles changing configuration, running/stopping VMs, networks, etc. transparently. For initial provisioning I have a shell script that wraps around `virsh`/`virt-install`/`virt-sysprep` to set up a new VM in ~1 minute (it uses a preseed file, which is similar to what cloud-init offers). This part could be better integrated with Ansible. Terraform has other advanced features, such as managing hosts on cloud providers, but I don't need those at the moment. If I ever do, I think I would still use Ansible to run Terraform deployments [1].

Edit: the libvirt role, if you're curious.
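For comparison, the Terraform libvirt provider covers this provisioning step natively via cloud-init instead of a preseed file; a rough, untested sketch, with all names made up:

```hcl
# Cloud-init seed disk: plays the same role as the preseed file.
resource "libvirt_cloudinit_disk" "seed" {
  name      = "front-seed.iso"
  user_data = <<-EOT
    #cloud-config
    hostname: front
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... # your key here
  EOT
}

# Attach the seed disk to the domain for first boot.
resource "libvirt_domain" "front" {
  name      = "front.example.org"
  memory    = 1024
  vcpu      = 1
  cloudinit = libvirt_cloudinit_disk.seed.id
}
```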
Personally, after looking at what the industry wants, I would start my homelab by trying to automate it with Ansible/Terraform. `libvirt` should be decent, and if you want to go over to BSD, I think Ansible supports `bhyve`? If not, `libvirt` definitely runs on BSD, so you could just automate that.

I work in security, so there is no real devops/sysadmin prospect for me. That said, I use Ansible and (mostly) Terraform professionally and for my lab, so that's a good idea nevertheless. I don't have much BSD experience; what do you think are the key reasons to go that route instead of Linux?
For me, it's a personal decision. I find BSD more cohesive. That is subjective and has been debated for a decade now. I also find `bhyve` a bit easier to use, although KVM has more features and more mature ones (for example, `bhyve` until very recently didn't have VirtIO drivers, so Windows machines would be useless on it).

I'm interested in working in security myself. Would you be able to tell me a little more about your work? Also, what role/path in security would you recommend for a Cloud admin/System Admin?
> Would you be able to tell me a little more about your work? Also, what role/path in security would you recommend for a Cloud admin/System Admin?
Well, I started as an IT ops person. I got lucky because my first job was in a fairly modern environment, and I got introduced to k8s, containers, and Linux administration (we were running k8s on bare metal). Slowly I moved more and more towards security, specifically infrastructure/platform security, which, to be honest, is not too far from a regular Cloud/System admin. However, the big difference is in mindset and priorities, which shift from availability to mostly confidentiality and integrity. My job essentially consists of supporting the security of whatever Kubernetes cluster we run, both managed and on bare metal, with the usual sprinkle of network security in the middle, and a strong focus on secure computation (i.e., container security). The actual work can range from research and experimentation, to concrete setup or development of new tooling, to developing standards and guidelines.
(Cloud) Security Engineering seems an obvious path for a cloud/system admin, and I don’t think it’s extremely hard to build the necessary security knowledge on top of a solid engineering background!
If you're breaking into the industry, I'd say ESXi. If it's a hobby, then Proxmox, since you're already familiar with it.
Proxmox has been great for me.
In the places where I've had to make similar decisions, I've used the need for 'advanced' features to make the call. If I'm looking for storage or networking redundancy, or I've been interested in running multiple host systems, or I've been looking to play with overlay networks, then I'll grab oVirt, Proxmox, vSphere, or OpenStack (depending). When I just want something simple-ish, I just use KVM/Podman on a Linux machine.
Good point, I don't have any advanced use case, except maybe some slightly more complex network setup. Probably this is achievable with KVM too (and/or some firewall-fu). I would like to have full IaC, so I don't have to click through GUIs, so the availability of Terraform providers might be a dealbreaker (which I haven't looked into yet for Proxmox, for example).
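For what it's worth, the libvirt Terraform provider can also define the networks themselves, so even a gateway-plus-isolated-backends layout stays in code. A sketch, with made-up names and addresses:

```hcl
# NAT network for the gateway/reverse-proxy VM (outbound via the host).
resource "libvirt_network" "dmz" {
  name      = "dmz"
  mode      = "nat"
  addresses = ["10.10.10.0/24"]
}

# Fully isolated network for the backend VMs: no forwarding at all,
# reachable only through a VM attached to both networks.
resource "libvirt_network" "internal" {
  name      = "internal"
  mode      = "none"
  addresses = ["10.10.20.0/24"]
}
```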
Go the unorthodox way: FreeBSD + CBSD + bhyve. One of the best free virtualisation stacks out there for a single server.
I have some research to do, I have never heard of that!
If you're looking at trying something different, give XCP-NG a try. It's a fork of XenServer. Great piece of software. Nothing wrong with Proxmox either.
I will have a look! I know XenServer, but I have very little experience with it; I will check XCP-NG out!
Absolutely nothing wrong with Proxmox, I am just exploring a bit (in fact, I have not yet looked at Terraform providers for it…)
Proxmox is nice. XCP-NG works, I suppose, even though it's a bit of a niche thing. It has 2 TB disk limits, though.
Why rent a whole server? You can run a cloud VM at a fraction of the cost.
Yeah, that is true, but at the same time I always felt a bit uncomfortable using a VM which shares resources with who knows what else. I also like the idea of having, for example, one VM acting as VPN, firewall, and reverse proxy, while the other VMs behind it have no (inbound) internet connection at all. It is somewhat achievable even with VPSs, but it's more complex IMO.
I am conflicted though, and I did consider VPSs to be clear.
The resources are shared, sure, but there’s complete logical isolation. Your VM can’t see others, and they can’t see you (barring any exploit or misconfiguration, but that can happen with physical servers just as well).
Personally I have all my services running in separate containers in one VM. Same separation, just at a different level.
Well, hypervisor bugs are rare, but not that rare. A physical server is fully isolated from other tenants of the provider (or rather, I can achieve that full isolation with network configuration).
> Personally I have all my services running in separate containers in one VM. Same separation, just at a different level.
I will definitely run all the services in containers anyway, but I am fully aware that containers don't provide much isolation, especially once you start using the host network to serve ports natively (e.g., containerized nginx/haproxy) or mounting filesystem volumes inside them. To be honest, in my current setup, where I am the only user of both the machine and the services (with the exception of a few family members), I am OK with this separation. However, if I run a lemmy/writefreely/fedi-software instance, which is going to host other untrusted users, I am not happy if my Git server or my password manager is running on the same box. That's mostly the reason why I was looking for full separation. I guess separate VPSs would also work, though.
LXD. Lightweight, and it makes it really easy to create and manage LXCs and VMs. The network management is amazing… Just try it!