I can’t help but feel overwhelmed by the sheer complexity of self-hosting modern web applications (if you look under the surface!)

Most modern web applications are designed to run more or less standalone on a server. Integrating them into an existing environment is a real challenge, if not impossible. They often come with their own set of requirements and dependencies that don’t easily align with an established infrastructure.

“So you have an already running and fully configured web server? Too bad for you, bind me to port 443 or GTFO. Reverse-proxying by subdomain? Never heard of that. I won’t work. Deal with it. Oh, and your TLS certificates? Screw them, I ship my own!”

Attempting to merge everything together requires meticulous planning, extensive configuration, and often annoying development work to find workarounds.

Modern web applications, with their elusive promises of flexibility and power, have instead become a source of maddening frustration whenever they are not the only application being served.

My frustration about this is real. Self-hosting modern web applications is an uphill battle, not only in terms of technology but also when it comes to setting up the hosting environment.

I just want to drop some PHP files into a directory and call it a day. A PHP interpreter and a simple HTTP server – that’s all I want to need for hosting my applications.

  • wpuckering · 1 year ago

    Containers really shine in the selfhosting world in modern times. Complete userspace isolation, basically no worries about dependencies or conflicts since it’s all internally shipped and pre-configured, easy port mapping, immutable “system” files and volume mounting for persistent data… And much more. If built properly, container images solve almost all problems you’re grappling with.

    I can’t imagine ever building another application myself without containerization. I can’t remember the last time I installed any kind of server-side software directly on a host, with the exception of packages the host itself needs to support containers or to improve its security posture.

    In my (admittedly strong) opinion, it’s absolute madness, and dare I say reckless and incomprehensible, to create a brand-new product in this day and age that doesn’t ship as container images, if you have the knowledge required to make that happen (properly and following best practices, of course), or the capacity to learn how in time to meet a deadline.

    I’m sure some would disagree or have special use-cases they could cite where containers wouldn’t be a good fit for a product or solution, but I’m pretty confident that those would be really niche cases that would apply to barely anyone.

  • poVoq · 1 year ago

    While php is still cool… join the dark side and start using containers 😏

    • @[email protected]
      link
      fedilink
      English
      11 year ago

      Yeah, I can’t imagine going back to not using containers. Call me a script kiddie if you want, but I can copy-paste some environment variables into a Docker Compose file and stand up a new service in ten minutes.

      I’m not going to say it’s always smooth sailing. I’ve definitely had containers with frustrating complications that took some sorting out. But man, if you want to just drop some files in a directory and go? Just get on board the Docker train and save yourself the headache.
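      To illustrate that workflow, a minimal sketch of such a Compose file — the service name, image, ports, and variables here are illustrative placeholders, not from any real project:

```yaml
# docker-compose.yml — hypothetical example service; image name,
# port mapping, and environment values are placeholders
services:
  myapp:
    image: example/myapp:latest
    restart: unless-stopped
    ports:
      - "8080:80"          # host port 8080 -> container port 80
    environment:
      - APP_SECRET=changeme
    volumes:
      - ./data:/app/data   # persistent data lives outside the container
```

      After tweaking the values, `docker compose up -d` is usually all it takes.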

  • @[email protected]
    link
    fedilink
    English
    21 year ago

    I recently(ish) installed Unraid on a new NAS, as I’d heard good things but knew nothing about it. Didn’t really intend to install much on it, but got playing around with the Docker stuff built into it and… fuck me. The amount of time I used to spend installing dependencies, configuring stuff, trying to work out why the hell it wasn’t working. With really not much work I’ve got a fully fledged Arr setup with Jellyfin, got a full dev environment, Grafana and Influx for monitoring, automated TLS certs, and a bunch of other things all working pretty damn flawlessly.

    Containers are awesome.

    • 𝘋𝘪𝘳𝘬 (OP) · 1 year ago

      Yes, containers could be the way – if every application came in a container, or if it were super easy to containerize them without the application knowing it.

      Can I run half a dozen applications in containers that all need port 443, and how annoying is that to set up?

      • Skull giver · 1 year ago (edited)

        [This comment has been deleted by an automated system]

  • @[email protected]
    link
    fedilink
    English
    21 year ago

    The thing that boils my blood is secret SQLite databases. I just want to store my volumes on a NAS using NFS and run the stacks on a server built for it. Having a container randomly blow up because an undocumented SQLite database failed to get a lock sucks ass.

    • 𝘋𝘪𝘳𝘬 (OP) · 1 year ago

      secret sqlite databases

      The thing is: “secret”. SQLite databases in general are awesome. Basically no configuration needed, they just work, they don’t even need their own server, and in 99% of all cases they’re absolutely enough for what they’re used for. I’d always choose a SQLite database over anything else – but it should be made clear that such a database is used.
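      To the point that they “just work”: a minimal sketch using Python’s built-in sqlite3 module – no server process, no configuration (the table and values are made up for illustration):

```python
import sqlite3

# A file path here would create the database on disk;
# ":memory:" keeps this sketch self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO posts (title) VALUES (?)", ("Hello, SQLite",))
conn.commit()
row = conn.execute("SELECT title FROM posts").fetchone()
print(row[0])  # -> Hello, SQLite
conn.close()
```

      That’s the whole setup – which is exactly why an application should document that this file exists, especially before you put it on NFS.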

  • @[email protected]
    link
    fedilink
    English
    11 year ago

    Sadly, a PHP dev environment and a webserver is not enough for modern devs.

    I just ended up installing Proxmox, and everything I install gets its own VM. Each one binds to whatever port it wants, and port 443 on my public IP is bound to a VM with nginx. If you hit a subdomain, nginx proxies the request to the actual server and port. Servers can ship whatever certificates they want; my nginx is the one clients negotiate TLS with, so it has its own certificate. The only other thing running on that VM is certbot.
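    A sketch of what that nginx side could look like – the domain, upstream VM address, and certificate paths are placeholder assumptions, not the actual config:

```nginx
# Illustrative sketch only: domain, upstream address, and cert
# paths are placeholders.
server {
    listen 443 ssl;
    server_name app.example.com;

    # certbot-managed certificate on the proxy VM
    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # forward to the VM that actually runs this application
        proxy_pass http://10.0.0.12:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

    One such server block per subdomain, and every backend can keep its own favorite port.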

    It’s honestly much simpler this way. Need to restart a machine after an install? Everything else stays up. One piece of software needs glibc version fuck-my-ass? Don’t care, that machine will have that version of glibc and I will not touch it. Software has a memory leak? QEMU doesn’t, and the VM is limited in RAM, so only that VM is crashing.

    Just make sure your VM template is good (and has your SSH key installed) and you’re golden. Before this week’s internet outage, I had 99.999% uptime with a single hypervisor, and the only monitoring I have is the uptime of all services as seen from AWS. I don’t even have alerts.

    I sometimes long for the days (that I missed – I’m only 24) of monolithic Linux servers where you have a webserver, a database server, and that’s it. Sadly, VMs are cheap and dependency hell is real. It’s still quite fun to tinker in the virtualized world. It’s just not the same as what has been.

  • D-RAJ · 1 year ago

    Perhaps a solution like CloudPanel or Cloudron would make self-hosting multiple sites/apps easier for you. I use CloudPanel to host multiple WordPress websites and it works very well. I use Cloudron to quickly deploy various open-source apps on one VPS.

    https://www.cloudpanel.io

    https://www.cloudron.io

  • 𝘋𝘪𝘳𝘬 (OP) · 1 year ago

    Sometimes venting a little helps. I finally sat down and learned the basics of Docker and found an easy-to-follow video series on how to set up Docker with Portainer and Nginx Proxy Manager. Works like a charm. I also set up my GoToSocial instance again but failed at setting up a Lemmy instance … but I guess that’s for another discussion :)

  • @[email protected]
    link
    fedilink
    English
    11 year ago

    Docker containers pretty much solve that: drop a docker-compose.yml file in place, maybe tweak a few lines, and that’s all.

  • grin · 1 year ago

    Not sure what the problem is, though. Pull up a reverse proxy, give all the crappy shit a private IP and whatever port they want, and access it through the proxy – then everyone can be on 443. 127.42.1.123:443, whatever. Maybe use real containers, or that crappy Docker shit; both offer you independent namespaces with all the ports and whatnot.