I saw this post today on Reddit and was curious to see if views are similar here as they are there.

  1. What are the best benefits of self-hosting?
  2. What do you wish you would have known as a beginner starting out?
  3. What resources do you know of to help a non-computer-scientist/engineer get started in self-hosting?
  • @[email protected]
    link
    fedilink
    English
    611 month ago

    The big thing for #2 would be to separate out what you actually need vs. what people keep recommending.

    General guidance is useful, but there’s a lot of ‘You need ZFS!’ and ‘You should use K8s!’ and ‘Use X software!’

    My life got immensely easier when I figured out I did not need any features ZFS brought to the table, and I did not need any of the features K8s brought to the table, and that less is absolutely more. I ended up doing MergerFS with a proper offsite backup method because, well, it’s shockingly low-complexity.

    And I ended up doing Docker with a bunch of compose files and bind mounts, because it’s shockingly low-complexity. And it’s just running on Debian, instead of some OS that has a couple of layers of additional software to make things “easier” because, again, it’s low-complexity.

    I can re-deploy the entire stack on new hardware in about 10 minutes (I’ve tested this a few times just to make sure my backup scripts work), and there’s basically zero vendor tie-in or dependencies that you’d have to get working first, since it’s just a pile of tarballs and packages from the distro’s package manager on, well, ANY distro.
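
    For a beginner, here’s a minimal sketch of what one of those per-service folders looks like (the service, image, and paths are just an example, not my exact setup):

    ```bash
    # /stacks/<service> holds the compose file plus all of the service's state
    # as plain directories (bind mounts), so backing up the folder backs up
    # the whole service.
    mkdir -p /stacks/jellyfin/config /stacks/jellyfin/cache

    cat > /stacks/jellyfin/docker-compose.yml <<'EOF'
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        ports:
          - "8096:8096"
        volumes:
          - ./config:/config  # bind mounts, not named volumes
          - ./cache:/cache
        restart: unless-stopped
    EOF

    cd /stacks/jellyfin && docker compose up -d
    ```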

    • Last · 8 points · 1 month ago

      My life got immensely easier when I figured out I did not need any features ZFS brought to the table, and I did not need any of the features K8s brought to the table, and that less is absolutely more.

      Same here. Sometimes I get carried away, but overall, a very basic setup is more than fine. Nearly all of my devices run Ubuntu/Debian, and only the work-related stuff gets over-engineered.

      It’s helpful for me to have something like a home lab where I can get hands-on experience with many different technologies. I’ve worn many hats, from developer to sysadmin, so a certain segment of my network tends to be built like Fort Knox. However, overall, 90% of my installs are minimalist with common best practices applied.

      • @[email protected]
        link
        fedilink
        English
        51 month ago

        IMO a homelab for learning and a server that you’re self-hosting services on really aren’t the same thing and maybe shouldn’t be treated that way, if you can swing it.

        I’d rather my password manager or jellyfin or my peertube instance or whatever not be relying on a tech stack I don’t entirely understand and might not be able to easily fix if it breaks.

        I guess a lot of it is the new-to-this vs. greybeard split, since the longer I’ve done sysadmin work, the less I care about the cool new thing and the more I prefer old, stable, documented, bugfixed, supported software with a clear roadmap.

        I should probably get a job doing sysadmin work for a bank, lmao.

        • Last · 3 points · 1 month ago (edited)

          If they’re a beginner, what better way is there to learn? My home lab and their Windows laptop running VirtualBox are two different things. The topic of security is too deep to cover now, but if they don’t open it up to the world, there shouldn’t be much risk. Local access only should be safe enough, and they might try a dozen different services before settling on one—or none at all.

          Edit: Sysadmin is boring, I need to create. DevOps or some other automation role would be perfect IMO

          • @[email protected]
            link
            fedilink
            English
            21 month ago

            This is going to be a bit grumpy-greybeard of me, but again: if you’re learning, then something like Docker and docker-compose is much simpler and less prone to fuckups than a bunch of K8s.

            If you don’t know ANYTHING about what you’re doing, starting with the simplest tools and then deciding if you want to learn the more complicated ones is probably a less insane path than jumping right into the configuration-as-code DevOps pipeline.

            And, at that point, you should have your “production” and “testing” environments set up in such a way they won’t eat each other when you do an oops.

            • Last · 1 point · 1 month ago (edited)

              Oh ok, we’re talking about two very different things then. That’s a very strong opinion for a simple question. I understand what you mean a little better now. Docker is better, but Windows had some weirdness going on with Docker Desktop last time I tried using it. WSL + Docker might be even better, to avoid the VM stuff altogether.

    • @ChapulinColorado · 4 points · 1 month ago

      I have made that migration myself, going from a Raspberry Pi 4 to an N100-based NAS. It was 10 minutes for the software stack, as you said. That’s not counting media migration, which happened in the background over a few hours on WiFi (I had everything on an external hard drive at the time).

      That last part is the only thing I would change about my self-hosting solution. Yes, the NAS has a nice form factor, is power-efficient, and has so far been very optimal for my needs (no lag like the RPi 4). However, I have seen that they don’t really sell motherboards or parts to repair them; they want you to replace it with another one. Reason 2 is vendor lock-in: depending on the options you select when creating the storage groups/pools (whatever they are called), you could be stuck needing to get something from the same vendor to read your data if the device stops working but the disks are salvageable. Reason 3 is that they’ve had security incidents, so I would never recommend using a lot of the “features”, to avoid exposing your data to ransomware over the internet. I don’t trust their competitors either; I know how commercial software is made, with the smallest amount of care for security best practices.

      • @[email protected]
        link
        fedilink
        English
        31 month ago

        Yeah, I just use plain boring desktop hardware. (Oh no! I’m experiencing data corruption due to the lack of ECC!) It’s cheap, it’s available, it’s trivial to upgrade and expand, and there are very few gotchas in there: you get pretty much exactly what it looks like you get.

        Also nice is that you can have a Ship of Theseus NAS by upgrading what needs upgrading as you go along, and you aren’t tied into entire platform swaps unless it makes sense - my last big rebuild was 3 years ago, but this is basically a 10-year-old NAS at this point.

    • @[email protected]
      link
      fedilink
      English
      11 month ago

      Can you elaborate on how your backup script re-deploys on new hardware? Sounds very nice to have.

      • @[email protected]
        link
        fedilink
        English
        21 month ago

        elaborate

        It’s a really simple script.

        Everything is deployed with a docker compose file, all the docker volume data are bind mounts, and, for example, a Jellyfin install would have everything in /stacks/jellyfin.

        The backup script makes a tarball of each service individually (and stops the stack if there’s anything in there doing database things or anything else that might end up inconsistent from just archiving the filesystem), and uploads them to an S3 storage provider AND burns them to a Blu-ray.

        The recovery script does the opposite: it downloads and unarchives the data.

        As long as you’re on Linux and have Docker, it should just magically work.
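
        Not my exact script, but the shape of it is roughly this (the bucket name and paths are placeholders; assumes the aws CLI or any S3-compatible client is set up):

        ```bash
        #!/bin/sh
        # Back up each /stacks/<service> folder: stop, tar, restart, ship offsite.
        set -e
        mkdir -p /backups
        for dir in /stacks/*/; do
            name=$(basename "$dir")
            (cd "$dir" && docker compose stop)   # quiesce databases etc.
            tar czf "/backups/$name.tar.gz" -C /stacks "$name"
            (cd "$dir" && docker compose start)
            aws s3 cp "/backups/$name.tar.gz" "s3://my-backup-bucket/"
        done

        # Recovery on a fresh box is just the reverse:
        # tar xzf jellyfin.tar.gz -C /stacks && cd /stacks/jellyfin && docker compose up -d
        ```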

          • @[email protected]
            link
            fedilink
            English
            21 month ago

            If you write the script yourself, just make sure you test it a couple of times, and preferably with different datasets from different runs.

            I found some edge-case stuff that would have prevented a restore even after I had tested it successfully a couple of times (some permission issues due to changes in containers and whatnot were resulting in less than the expected data being archived and restored).

    • @Eximius · 1 point · 1 month ago

      btrfs, with its send/receive (incremental fs-level backups), is already stable enough for mostly everything (it just has some issues with RAID 5/6), and it’s much more performant than ZFS. It’s also in the Linux kernel tree, which is hugely useful. That is, of course, if ZFS-like functionality is what you’re looking for.

      • @[email protected]
        link
        fedilink
        English
        51 month ago

        “Already stable enough”

        1. no it isn’t.
        2. it fucking should be, it’s been around 15 years!
        • @[email protected]
          link
          fedilink
          English
          3
          edit-2
          1 month ago

          My only experience with btrfs was when trying out openSUSE Tumbleweed. Within a couple of days my home partition was busted; the next time, it was another partition. No idea if the problems could have been fixed, as these were fairly new installations to give openSUSE a try, and I couldn’t be bothered to fix a system that was troubling me from the very beginning.

          Among all the options that just work™, btrfs is the one I’ve learned to stay away from.

          EDIT: that was four or five years ago

        • @thomasloven · 0 points · 1 month ago (edited)

          And I’ve been using it for ~~eight~~ six of those 15 in RAID 5/6 with zero issues, so YMMV I guess. Sorry you experienced problems.

      • @[email protected]
        link
        fedilink
        English
        41 month ago

        Honestly it’s not; BTRFS has been in my ‘that’s neat, but it’s still got a non-zero chance of deciding to light everything on fire because it’s bored’ list for, uh, a decade now?

        The NAS build is old enough to more or less predate BTRFS being usable (closing in on a decade since I did the initial OS install, jeez), and none of the features matter for what I’m storing: if every drive in my NAS died today, I’d be very annoyed for a couple of hours during the rebuild, and would lose terabytes of linux ISOs that I can just download again, if I wanted to use Jellyfin to install them a 2nd time. (Any data I care about is pulled offsite at least once a day, so I’ve got pretty comprehensive backups minus the ISOs.)

        I know EXT4 and mergerfs and snapraid aren’t cool and don’t have shiny features, but I’ve also had zero problems with them over the last decade, even across Ubuntu upgrades (16.04, 18.04, 20.04, 22.04), hardware platform upgrades (6600k, 8700k, 10950k), the entire replacement of all the system drives (hdd -> ssd -> nvme), and the expansion and replacement of dead HDDs of varying sizes (4tb drives to 8tb drives to 16tb drives to some 20tb drives).

        It all just… worked, and at no point was I concerned about the filesystem not working if I replaced or upgraded or changed something, which is not something ZFS or BTRFS would have guaranteed during that same time window.
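
        For anyone curious, that whole stack is only a couple of small config files; a sketch (disk names and paths are placeholders, not my actual layout):

        ```bash
        # /etc/snapraid.conf: parity + content lists over plain EXT4 data disks
        cat > /etc/snapraid.conf <<'EOF'
        parity /mnt/parity1/snapraid.parity
        content /var/snapraid.content
        content /mnt/disk1/snapraid.content
        data d1 /mnt/disk1/
        data d2 /mnt/disk2/
        EOF

        # /etc/fstab: mergerfs pools the data disks into one big mount
        echo '/mnt/disk*  /mnt/storage  fuse.mergerfs  cache.files=off,category.create=mfs  0 0' >> /etc/fstab

        # run periodically (e.g. from cron): update parity, then verify
        snapraid sync && snapraid scrub
        ```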

      • lemmyvore · 2 points · 1 month ago

        IMHO 99% of the time btrfs features are used as a band-aid for things that would be much better done otherwise - generally by using a stable distro and a decent backup solution (like Debian + Borg). And you get to use a truly stable, proven, boring fs like ext4 or xfs.

        • @Eximius · 4 points · 1 month ago

          Stable, yes, but with no protection from bitrot; ext4’s journal is the band-aid, compared to a CoW fs like ZFS or btrfs.

          • lemmyvore · 1 point · 1 month ago

            You can protect important data with backups, which you should do anyway, and in practice I feel like the added complexity of BTRFS and ZFS is not worth the COW.

            BTRFS is cool but they tried to cram way too much too fast into it and it added a ton of complexity and it’s still not 100% done after all these years. A COW mode for ext4 would have been adopted much faster.

  • @[email protected]
    link
    fedilink
    English
    531 month ago
    • you do not need kubernetes
    • you do not need anything to be “high availability”; that just adds a ton of complexity for no benefit. Nobody will die or go broke if your homelab is down for a few days.
    • tailscale is awesome
    • docker-compose is awesome
    • irreplaceable data gets one offsite backup, one local backup, and ideally one normally offline backup (in case you get ransomwared)
    • yubikeys are cool and surprisingly easy to use
    • don’t offer your services to other people until you are sure you can support it, your backups are squared away, and you are happy with how things are set up.
    • Encrypt-Keeper · 20 points · 1 month ago (edited)

      To piggyback on your “You don’t need k8s or high availability”:

      If you want to optimize your setup in a way that’s actually beneficial on the small, self hosted scale, then what you should aim for is reproducibility. Docker compose, Ansible, NixOS, whatever your pleasure. The ability to quickly take your entire environment from one box and move it to another, either because you’re switching cloud providers or got a nicer hardware box from a garage sale.

      When Linode was acquired by Akamai and subsequently renamed, I moved all my cloud containers to Vultr by rsyncing the folder structure to the new VM over SSH, then running the compose file on the new server. The entire migration short of changing DNS records took like 5 minutes of hands-on time.
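
      In practice it boils down to something like this (hostnames and paths made up):

      ```bash
      # copy the per-service folders (compose files + bind-mounted data)...
      rsync -az --progress /stacks/ newhost:/stacks/

      # ...then start everything on the other side
      ssh newhost 'for d in /stacks/*/; do (cd "$d" && docker compose up -d); done'
      ```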

        • @[email protected]
          link
          fedilink
          English
          21 month ago

          I’ve been in love with the concept of ansible since I discovered it almost a decade ago, but I still hate how verbose it is, and how cumbersome the yaml based DSL is. You can have a role that basically does the job of 3 lines of bash and it’ll need 3 yaml files in 4 directories.

          About 3 years ago I wrote a big ansible playbook that would fully configure my home server, desktop and laptop from a minimal arch install. Then I used said playbook for my laptop and server.

          I just got a new laptop and went to look at the playbook but realised it probably needs to be updated in a few places. I got feelings of dread thinking about reading all that yaml and updating it.

          So instead I’m just gonna rewrite everything in simple python with a few helper functions. The few roles I rewrote are already so much cleaner and shorter. Should be way faster and more user friendly and maintainable.

          I’ll keep ansible for actual deployments.

      • @[email protected]
        link
        fedilink
        English
        2
        edit-2
        1 month ago

        I have a k3s cluster for fun and I can admit that k8s is way too complicated.

        I don’t want to dig for hours through documentation to find what I’m looking for. The docs sometimes feel like they were written for software devs, and you’re expected to figure out part of the solution yourself.

        I have an ExternalName service that keeps fucking up my cluster every time it restarts, bringing down my ingresses, because for some reason it doesn’t work and I have no idea where to look to figure out why - I just end up killing the service and reapplying the YAML file, and it works.

        I had to diagnose why my SSL certificates would get stuck in “issuing” in cert-manager, had to dig through 4 or 5 different resources until I got to an actual, descriptive error message telling me that I configured my ClusterIssuer wrongly.

        I wanted a k3s cluster to learn but every time I have issues with it I realize it’s a terrible idea.

        I wish I had podman + compose, but it does seem like compose on podman is more complicated. Also, I wish I could do Ansible, but I have no idea where to start (nor how it works).

        EDIT: oh yeah I also lost IPv6 support because k3s by default doesn’t enable v6 and I was planning on using Hetzner CCM to have a 2 node cluster until I realized Hetzner Networks don’t support v6.

      • @[email protected]
        link
        fedilink
        English
        11 month ago

        I had a similar experience with NixOS-anywhere and a VPS issue. I reset the OS, set up SSH key access, and ran NixOS-anywhere, and within like 15 minutes I was back up and running.

      • mesamune · 1 point · 1 month ago

        I just moved everything from vultr to self host because of their latest changes.

        • Encrypt-Keeper · 0 points · 1 month ago (edited)

          EDIT: As I suspected, the changes that u/mesamunefire is referencing are the ones that were taken out of context a while back and incorrectly assumed to apply to user VPSes and the data on them, which is not the case. Those terms only apply to information posted publicly to their website, like the community forums.

          What changes would those be?

          • @[email protected]
            link
            fedilink
            English
            4
            edit-2
            1 month ago

            Can’t speak for OP, but I bailed on Vultr because of how they handled the arbitration agreement change. Basically, I couldn’t access my containers without accepting the new TOS, so I “hacked” the website with Inspect Element so I could access support to close my account. For me, the arbitration change wasn’t the issue (my current host has similar policies); being forced to accept a new TOS to use my account was. I had no option to disagree or “remind me later”; I literally only had an “accept” button. I refuse to use any service that treats me like that.

            I’m now with Hetzner, so we’ll see if they pull that nonsense. I only use the VPS to get around my ISP’s CGNAT (WireGuard VPN w/ HAProxy at the edge to route domains), so if they pull the same nonsense, I’ll copy my config to another VPS.

            • mesamune · 1 point · 1 month ago

              How is Hetzner?

              • @[email protected]
                link
                fedilink
                English
                21 month ago

                Seems to work fine and meets my needs. I’m in the US though, and options are pretty limited. So far the only snag I’ve had was having to upload my ID, I guess their signup process is a little strict.

                • mesamune · 1 point · 1 month ago

                  Nice. I’m all about new providers

          • mesamune · 1 point · 1 month ago (edited)

            https://old.reddit.com/r/webdev/comments/1boz5ne/vultr_new_tos_claims_all_commercial_rights_to/

            “You hereby grant to Vultr a non-exclusive, perpetual, irrevocable, royalty-free, fully paid-up, worldwide license (including the right to sublicense through multiple tiers) to use, reproduce, process, adapt, publicly perform, publicly display, modify, prepare derivative works, publish, transmit and distribute each of your User Content, or any portion thereof, in any form, medium or distribution method now known or hereafter existing, known or developed, and otherwise use and commercialize the User Content in any way that Vultr deems appropriate, without any further consent, notice and/or compensation to you or to any third parties, for purposes of providing the Services to you.”

            And you could not opt out. You had to click agree in order to login. That’s the biggest one.

            It was later removed, but there were other changes that sucked.

            • mesamune · 1 point · 1 month ago (edited)

              I had customer data as well as some personal stuff on a couple of servers. It was low-hanging fruit, so I just started self-hosting. It’s silly how many rights they suddenly wanted. Not worth the hassle; they just provide basic boxes to begin with.

              They also would not let you log in without accepting those new rights, nor were you able to opt out. So I just threw my infra on some local systems, deleted everything, and then had to say yes to their TOS anyway. Again, silly, and a great way to lose business.

            • Encrypt-Keeper · 1 point · 1 month ago (edited)

              That only applies to posts on their forums, not the content on your VPS.

              • mesamune · 0 points · 1 month ago

                Nope. It’s the content.

                • Encrypt-Keeper · 1 point · 1 month ago (edited)

                  Incorrect. It applies only to the forums. It does not apply in any way, shape, or form to your VPS or the content on it. It’s one thing to be mistaken, but let’s not spread misinformation on purpose.

                  A Reddit post incorrectly took portions of our Terms of Service out of context, which only pertain to content provided to Vultr on our public mediums (community-related content on public forums, as an example) for purposes of rendering the needed services – e.g., publishing comments, posts, or ratings. This is separate from a user’s own, private content that is deployed on Vultr services.

                  Since our inception, Vultr has been committed to upholding and adhering to the strictest data privacy and protection standards across the world (including HIPAA, GDPR, and DPDPA). Our customers own 100% of their content.

    • @[email protected]
      link
      fedilink
      English
      31 month ago

    Not needing Kubernetes is a broad statement. It allows for better management of storage, and it literally gives you a reverse proxy configurable with YAML, if you know what you’re doing.

      • Nomecks · 9 points · 1 month ago

        Yes, but you don’t need Kubernetes from the start.

        • @[email protected]
          link
          fedilink
          English
          21 month ago

          Well, I guess podman works fine for the first few months. Interestingly, I still use buildah heavily for building my custom images.

          • Nomecks · 1 point · 1 month ago

            I find a lot of stuff is using docker compose, which works with Podman, but using straight docker is easier, especially if it’s nothing web-facing

            • @[email protected]
              link
              fedilink
              English
              21 month ago

              Funnily enough, Docker compose has never worked for me on Podman. There always seems to be something that is incompatible (also due to me running on Debian). However, I feel like using Kubernetes manifests should become a standard amongst homelabbers and professionals going forward, since it is the most portable format.

      • @keyez · 2 points · 1 month ago

        Heavy disagree on the storage statement from what I’ve used and seen, but it works for lots of people, so I’m not going to knock it. NFS is always a pain, but Longhorn seems to have advantages.

        • @[email protected]
          link
          fedilink
          English
          21 month ago

          NFS is a pain, no question about it. I used to use Longhorn, but these days, since I’m doing a single-node k3s, I’m just doing hostPath. It’s just that PVCs make intuitive sense to me; I guess podman will likely work just fine for such cases, apart from canary deployments and OOTB service meshes.

  • @[email protected]
    link
    fedilink
    English
    461 month ago

    I wish I knew not to trust closed source self-hosted applications, such as Plex. Would have saved a lot of time and money.

      • @[email protected]
        link
        fedilink
        English
        36
        edit-2
        1 month ago

        Plex is a great example here. I’ve been a Hetzner customer for many, many years, and bought a lifetime license to Plex. Only to receive, a few months later, a notification from Plex that I am no longer allowed to self-host Plex for myself (and only myself) at Hetzner, and that they will block all access to my self-hosted Plex instance. I tried to ask for leniency or a refund, but that was wasted effort as well.

        In short, I was caught in the crossfire when a for-profit company tried to please Hollywood by attempting to reduce piracy, so they could get new VC funding.

        I am now a happy Jellyfin user and warmly recommend all Plex users to try it, the Jellyfin community is awesome!

        (Use your favourite search engine to look up “Hetzner Plex ban” for more details)

          • @[email protected]
            link
            fedilink
            English
            11
            edit-2
            1 month ago

            Yes, correct.

            I apologize if someone misunderstood my reply; Plex was the bad actor here.

          • @[email protected]
            link
            fedilink
            English
            51 month ago

            Still with Hetzner, yeah. I haven’t had to deal with Hetzner customer support in recent years at all, but they have been great in the past.

  • wagesj45 · 36 points · 1 month ago

    It is much easier to buy one “hefty” physical machine and run Proxmox with virtual machines for servers than it is to run multiple Raspberry Pis. After living that life for years, I’m a Proxmox shill now. Backups are important (read the other comments), and Proxmox makes backup/restore easy. Because eventually you will fuck a server up beyond repair, you will lose data, and you will feel terrible about it. Learn from my mistakes.

  • macniel · 31 points · 1 month ago

    My reason for self hosting is being in control of my shit, and not the cloud provider.

    I run jellyfin, soulseek, freshRSS, audiobookshelf and nextcloud. All of that on a Pi 4 with an SSD attached, accessible via wireguard. That SSD is also accessible as an NFS share.

    As I already knew Linux very well before I started my own cloud, I didn’t really have to learn much.

    The biggest resource I could recommend is the awesome-selfhosted repository on GitHub, where a huge number of self-hostable solutions are linked.

  • @[email protected]
    link
    fedilink
    English
    17
    edit-2
    1 month ago

    2. What do you wish you would have known as a beginner starting out?

    Caddy. Once you try Caddy there’s no turning back to Nginx or Apache.

    • poVoq · 14 points · 1 month ago

      That’s what everyone thinks for a while, and then they go back to Nginx.

      • @[email protected]
        link
        fedilink
        English
        41 month ago

        Eh, my main reason for switching is that Caddy builds in LetsEncrypt. My Caddyfile is really simple, it’s just a reverse proxy that handles TLS and proxies regular HTTP to my services. I don’t have it serving any files or really knowing anything about the services. Here’s my setup:

        1. HAProxy - directs subdomains to devices (in VPN) based on SNI
        2. Caddy - manages TLS and LetsEncrypt and communicates w/ services over HTTP
        3. Nginx - serves files for things like NextCloud, if needed (most services have their own HTTP server)

        Each of these are separate Docker containers, which makes it really easy to manage and diagnose problems. The syntax for Nginx is more complex for 1&2, and the performance benefit of managing it all in one service just isn’t relevant for a self-hosted system, so I use this layered approach that makes each level as simple as possible.
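
        To illustrate the layering, here’s a sketch of levels 1 and 2 (hostnames and IPs are placeholders, not my actual config): HAProxy routes raw TLS by SNI without terminating it, and Caddy terminates TLS and speaks plain HTTP inward.

        ```bash
        # haproxy.cfg (sketch): TCP mode, TLS passthrough, routing on SNI
        cat > haproxy.cfg <<'EOF'
        frontend tls_in
            bind *:443
            mode tcp
            tcp-request inspect-delay 5s
            tcp-request content accept if { req_ssl_hello_type 1 }
            use_backend nas if { req.ssl_sni -i cloud.example.com }
            default_backend main

        backend main
            mode tcp
            server caddy1 10.0.0.2:443

        backend nas
            mode tcp
            server caddy2 10.0.0.3:443
        EOF

        # Caddyfile (sketch): Caddy gets the certs and proxies plain HTTP inward
        cat > Caddyfile <<'EOF'
        cloud.example.com {
            reverse_proxy nextcloud:80
        }
        EOF
        ```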

      • kadotux · 2 points · 1 month ago

        As someone who just learned about Caddy, could you elaborate?

        • poVoq · 5 points · 1 month ago (edited)

          You usually want less integration, not more - simple, self-contained things. Nginx is good at that. That’s also why you don’t want to use Nginx Proxy Manager or Certbot’s Nginx integration, etc. At first it looks like they make things easier, but there is too much hidden complexity under the hood.

          Also, sooner or later you will run into some software that you would really like to try which is only documented for Nginx and uses some sort of image caching or the like that is hard to replicate with Caddy.

          • Deemo · 2 points · 1 month ago

            Certbot’s Nginx integration etc

            How do you obtain SSL certs then?

            • poVoq · 1 point · 1 month ago (edited)

              I switched to Dehydrated (with dns-01 challenge), but Certbot itself is fine, the problem is the Nginx integration that tries to automatically change your Nginx config files.

              • Deemo · 2 points · 1 month ago

                Do you manually update the config every 3 months?

                • poVoq · 1 point · 1 month ago

                  ??? The location and the file name of the certificates don’t change, so why would I have to do that?

                  On the contrary, before I disabled the certbot’s Nginx integration, every three months certbot would “manage” to break my Nginx and I had to manually repair it.

                  I think we are not talking about the same thing. I mean the Certbot extension that automatically modifies the Nginx config files. A telltale sign is usually the “# managed by Certbot” comments it likes to leave behind all over your config files.

          • @Zeoic · 1 point · 1 month ago

            Not sure I agree about Proxy Manager. Anything you need to access is in the GUI, and you can easily add advanced configs to the entries. I’ve been using it for 5 or so years, and there hasn’t been anything I missed from using straight Nginx before that.

            • @[email protected]
              link
              fedilink
              English
              21 month ago

              The benefit of using config files is easy version management via git. It makes it easy to rebuild from scratch, and easy to roll back a change that breaks something.

              • @Zeoic · 1 point · 1 month ago

                Fair enough. I manage the same by backing up the VM it’s on.

      • @keyez · 2 points · 1 month ago

        I went from Nginx to HAProxy for 5 years, and have now been on Caddy for 2 and loving it. So much simpler and more efficient.

      • lemmyvore · 1 point · 1 month ago

        I’m currently in the process of separating the certificate renewal service from the reverse proxy completely.

        But if you’re just starting out Nginx Proxy Manager makes it so easy.

          • lemmyvore · 3 points · 1 month ago (edited)

            It lets you change reverse proxy or run a website with TLS completely independently of the certbot. The certbot deals with obtaining certs and leaves them in a dir, and the proxies or webservers just take them from that dir. If the proxy container breaks the certbot still does its thing etc.

            It also makes it easier to do stuff like run different proxies in parallel for different things, chain proxies (for instance, if you need to use a VPS because you can’t forward ports), and so on.

            But it’s all for advanced setups, for basic stuff I’d still go with NPM.
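
            In container terms the decoupling can be as simple as a shared directory (names here are illustrative; the proxy config still has to point at those cert paths):

            ```bash
            # the certbot container renews into a directory it owns...
            docker run --rm -v /srv/certs:/etc/letsencrypt certbot/certbot renew

            # ...and the proxy only ever mounts that same directory read-only,
            # so either container can be replaced without touching the other
            docker run -d --name proxy -v /srv/certs:/etc/letsencrypt:ro -p 443:443 nginx
            ```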

    • @[email protected]
      link
      fedilink
      English
      31 month ago

      Apparently traefik might be better if you run docker compose and such, as it does auto-discovery, which reduces the amount of manual configuration required.
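
      A rough sketch of how that auto-discovery looks (the domain and test service are placeholders): Traefik watches the Docker socket and builds its routes from container labels, so there’s no central proxy config to edit per service.

      ```bash
      cat > docker-compose.yml <<'EOF'
      services:
        traefik:
          image: traefik:v3.0
          command:
            - --providers.docker=true
            - --providers.docker.exposedbydefault=false
            - --entrypoints.web.address=:80
          ports:
            - "80:80"
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock:ro

        whoami:
          image: traefik/whoami
          labels:
            - traefik.enable=true
            - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
            - traefik.http.routers.whoami.entrypoints=web
      EOF
      docker compose up -d
      ```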

      • @[email protected]
        link
        fedilink
        English
        11 month ago

        I maintain the DNS plugin for Vultr and I can say that it’s “safe”, but if you’re worried you should check their source code.

        I believe it’s more likely for a vulnerability to be in the external provider’s API client (for example, caddy-dns/vultr uses govultr) than in Caddy itself. But I wouldn’t take things for granted if I were skeptical about these plugins.

    • @Passerby6497 · 2 points · 1 month ago

      LOL, as a noob I went with Caddy, then traefik, before settling on NPM. Ironically, all the “QoL” features people brag about just made base configs harder and led to shit randomly failing.

      NPM has been solid as a rock; even if I have to do slightly more work, it’s more reliable and does what I want quicker and easier than the alternatives.

    • @[email protected]
      link
      fedilink
      English
      11 month ago

      I’ve been meaning to try Caddy, but I just can’t imagine something simpler than Nginx Proxy Manager.

  • Max-P · 17 points · 1 month ago

    I’ll parrot the top reply from Reddit on that one: to me, self-hosting starts as a learning journey. There’s no right or wrong way; if anything, I intentionally do wacky, weird things to test the limits of my knowledge. The mistakes and troubles are where you learn. You don’t really understand the significance of good backups until you’ve had to restore from them.

    Even in production, it differs wildly. I have customers for whom I set up a bare-metal Ubuntu server in some datacenter for cheap, and they’ve been running on that setup for 10 years; a small mom-and-pop shop will never need a whole cluster of machines. Then at my day job we’re looking at things like Kubernetes and very heavyweight stacks, because we handle a lot of traffic.

    Some people self-host a PiHole on a Raspberry Pi and that’s all they need. Some people have entire NAS setups with smart TVs accessing their Plex/Jellyfin servers for the whole extended family. I host my own emails, which is a pain in the ass to get working reliably and clean your IP reputation.

    I guess the only thing you should know is: you need some time to commit to maintaining your stuff if you don’t want it to break or get breached (if exposed to the Internet), and a willingness to learn, because self-hosting isn’t a turnkey experience. It can be a turnkey installation, but when your SD card or drive fails, you’re still on your own to troubleshoot and fix it. You don’t set up a NextCloud server to replace Google Drive with the expectation that you can shove the server in a closet forever. Owning your infrastructure and data comes with a small but very important upkeep time investment.

  • Presi300 · 14 points · 1 month ago (edited)

    Benefits:

    • Cheap storage that I can use both locally and as a private cloud. Very convenient for ~~piracy~~ storing all my legally obtained files.

    • Network wide adblocking. Massive for mobile games/apps.

    • Private VPN. Really useful for using public networks and bypassing network restrictions.

    • Gives me an excuse to buy really cool, old server and networking hardware.

    As for things I wish I knew… Don’t use windows for servers. Just don’t.

    SMB sucks, try NFS.

    Use docker, managing 5 or 10 different apps without containers is a nightmare.

    Bold of you to assume I’m a computer scientist or engineer, or that I have a degree lmao. I just hate ads, subscriptions and network restrictions, so I learned how to avoid those things. As for resources to get started… look up TrueNAS SCALE. It basically does all of the work for you.

    • @rekorse · 1 point · 1 month ago

      How does the network-wide ad blocking work? That would solve a big issue with my kids.

      • icedterminal · 1 point · 1 month ago

        You either set the DNS settings per device to point at the system running PiHole / AdGuard Home, or, if your router allows it, set the DNS there. It’s ideal to set it on the router.

        Any time a device makes a DNS request to a domain, it’s checked against the list. If found, it’s stopped. If not found, it gets sent upstream to your choice of a public DNS configured during setup. I use Cloudflare (1.1.1.1, 1.0.0.1).

  • @kitnaht · 11 points · 1 month ago (edited)
      • Learning. If you ever found yourself tired of learning new things, your life is basically done.
      • Cost. You already have an internet connection at home. It’s practically a necessity these days. The connection is likely fast enough for most things. Renting even the most piddly of VPS is wildly expensive. Just throw a spare machine at it and go wild.
      • Freedom. Your own data is constantly being collected, regurgitated, and sold back to you. More people need to care about this incessant invasion of our lives.
      • Backups. 3 copies, on different forms of storage, in multiple PHYSICALLY distinct locations. Just when you have that teeny little imp in the back of your mind say “hmm, I should probably back up soon” – stop everything you’re doing and run a backup.
      • Test your recovery! Backups are only good if you can recover from them. Many have lost data because they failed to ever fail-test their backups.
      • Google. Legitimately the best skill you can ever attain is simply being able to search effectively and be able to learn jargon quickly. Once you have the lingo down, searches become clearer, quicker, more precise.
    • @ChapulinColorado · 1 point · 1 month ago (edited)

      For #1, I would say don’t force yourself to keep grinding at the same thing you’ve recently started. It took me a few months to get my local setup going, since I would work on it after work (which exercises similar skills) and get tired of poking around.

      At some point I gave up and started doing other things that brought me joy (video games, paint nights with YouTube tutorials, movies/TV). When I finally decided to get back to it, it was enjoyable again. If I had to re-do it from scratch, it could be done in probably a few hours, or at most some nights after work, and it would be enjoyable, since the annoying “gotcha” lessons are either committed to memory or a few searches away and can be filtered much quicker.

  • nfh · 8 points · 1 month ago
    1. I’ve learned a number of tools I’d never used before, and refreshed my skills from when I used to be a sysadmin back in college. I can also do things other people don’t loudly recommend, but fit my style (Proxmox + Puppet for VMs), which is nice. If you have the right skills, it’s arbitrarily flexible.

    2. What electricity costs in my area: $0.32/kWh at the wrong time of day. Pricier hardware could have saved me money in the long run. Bigger drives could also mean fewer drives, and thus less power consumption.

    3. Google, selfhosting communities like this one, and tutorial-oriented YouTubers like NetworkChuck. Get ideas from people, learn enough to make it happen, then tweak it so you understand it. Repeat, and you’ll eventually know a lot.

    • @tburkhol · 0 points · 1 month ago
      2. What electricity costs in my area. $0.32/kWh at the wrong time of day.

      I assume you have this on a UPS. What about using a smart plug to switch to UPS during the expensive part of the day, then back to mains to charge when it’s cheaper? I imagine that needs a bigger UPS than one would ordinarily spec, and that cost would probably outweigh the electric bill, but you never know.

      • @[email protected]
        link
        fedilink
        English
        51 month ago

        That’s not really what a UPS is designed for, they’re meant to last minutes. Long enough for a clean shutdown or to start a generator.

        You’d want something like a whole house battery backup instead.

  • @[email protected]
    link
    fedilink
    English
    71 month ago

    Podman quadlets have been a blessing. They basically let you manage containers as if they were simple services. You just plop a container unit file in /etc/containers/systemd/, daemon-reload and presto, you’ve got a service that other containers or services can depend on.
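
    A minimal sketch of what a quadlet unit looks like (the image and port are just an example; needs a reasonably recent Podman, 4.4+):

    ```bash
    # /etc/containers/systemd/jellyfin.container -- the quadlet generator
    # turns this into a jellyfin.service unit on daemon-reload
    cat > /etc/containers/systemd/jellyfin.container <<'EOF'
    [Unit]
    Description=Jellyfin media server

    [Container]
    Image=docker.io/jellyfin/jellyfin:latest
    PublishPort=8096:8096
    Volume=/stacks/jellyfin/config:/config

    [Install]
    WantedBy=multi-user.target default.target
    EOF

    systemctl daemon-reload
    systemctl start jellyfin.service
    ```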

    • @rekorse · 3 points · 1 month ago

      Is containers here used in the same context as docker? I’m not familiar with podman.

      • mesamune · 3 points · 1 month ago

        Just about but it’s more experimental.

  • Ebby · 7 points · 1 month ago
    1. data stays local for the most part. Every file you send to the cloud becomes property of the cloud. Yeah, you get access, but so does the hosting provider, their 3rd party resources, and typical government compliances. Hard drives are cheap and fast enough.

    2. not quite answering this right, but I very much enjoy learning and evolving. But technology changes and sometimes implementing new software like caddy/traefik on existing setups is a PITA! I suppose if I went back in time, I would tell myself to do it the hard way and save a headache later. I wouldn’t have listened to me though.

    3. Portainer is so nice, but has quirks. It’s no replacement for the command line, but wow, does it save time. The console is nerdy, but when time is on the line, find a good GUI.

    • @ChapulinColorado · 1 point · 1 month ago

      For item #1, self hosted solutions like home assistant also allow using “smart” devices without the cloud in some instances. You are not at the mercy of a vendor going out of business or dropping support and your devices becoming bricks.

      Not all devices are compatible, but from what I’ve learned, I would never buy another device with so called “smart” features if it is not compatible with home assistant.

  • @[email protected]
    link
    fedilink
    English
    71 month ago
    1. less is more, it’s fine to sunset stuff you don’t use enough to afford them using cpu cycles, memory and power
    2. search warrants are a real thing and you should not trust others to use your infrastructure responsibly because you will be the one paying for it if they don’t.
    • Flax · 4 points · 1 month ago

      Is there a story attached to no. 2?

      • @[email protected]
        link
        fedilink
        English
        31 month ago

        Well, turns out that when you host a private service that allows others to share files, they might share files that they are not allowed to share. And then your door gets kicked in in the morning, and suddenly no one wants to take credit for the actual upload anymore.

        • @[email protected]
          link
          fedilink
          English
          21 month ago

          Yeah… Becoming a public-facing file host for others to use seems rather irresponsible.

          If/when a user’s given a means of uploading files to my server, there’s no method/permissions for them to share those files with others; it’s really just for them to send files to me. (Filebrowser is pretty good for that)

          That and almost nothing is public access; auth or gtfo.

  • poVoq · 6 points · 1 month ago

    For #2: use a dns-01 challenge to generate wildcard SSL certs. Saves so much time and nerves.
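
    For example, with certbot and a DNS provider plugin (Cloudflare here is just one option; any supported provider works the same way):

    ```bash
    # dns-01 proves ownership via a TXT record, so nothing needs to listen
    # on ports 80/443 and wildcard certs are allowed
    # (assumes the certbot-dns-cloudflare plugin is installed)
    certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
      -d 'example.com' -d '*.example.com'
    ```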

    • @ShortFuse · 2 points · 1 month ago (edited)

      My open-source, zero-dependency JS library for requesting and generating certs with dns-01: https://github.com/clshortfuse/acmejs

      I only coded support for name.com, but it is compatible with anything, really. It can also run in the browser, which could be useful in a pinch.

  • @[email protected]
    link
    fedilink
    English
    61 month ago
    1. Our internet goes out periodically, so having everything local is really nice. I set up DNS on my router, so my TLS certs work fine without hitting the internet.
    2. I wish someone would’ve taught me how to rip Blu-rays. It wasn’t a big deal, but everything online made it sound super sketchy to flash firmware onto a Blu-ray drive.
    3. I’m honestly not sure. I’m in CS and am really into Linux, so I honestly don’t know what would be helpful. I guess start small and get one thing working at a time. There’s a ton of resources online for all kinds of skill levels, and as long as you do one thing at a time, you should eventually see success.