I’m pretty new to the fediverse, and I find the idea amazing. One thing concerns me, though: how will server owners be able to afford to run servers with massive amounts of data coming through them? Theoretically speaking, if a Reddit migration were to happen, what would server upkeep costs look like?

  • @acchariya
    English
    1 year ago

    Just reading through this, it seems crazy to me that lemmy.world is being scaled vertically. Is there something about how it works that prevents horizontal scaling (like load balancing across a number of servers all using the same DB)?

    • Forkk
      English
      1 year ago

      As far as I can tell, the software just wasn’t built with that in mind, so I would expect bugs or weird behavior like race conditions. Nothing is stopping anyone from trying it to see what happens, though, I guess.

    • mo_ztt ✅
      English
      1 year ago

      I’m trying to start a project with a fairly ambitious goal: taking load off the central instance to reduce hosting costs (whether that instance is a single powerful server or several servers pointed at the same DB). It’s still in an early form, but the core of it, making it so that running a Lemmy node isn’t too punishing on the main instance’s server, is an attempt to do the engineering to accomplish exactly this.

    • @[email protected]
      English
      1 year ago

      From what I understand, Lemmy is just using a PostgreSQL database, and there are many ways to load balance and horizontally scale it. You could use something like HAProxy for load balancing, and for horizontal scaling you could add multiple PostgreSQL read replicas; if you want to manage the scaling automatically, you could use a tool like ClusterControl or whatever. There’s plenty of documentation on the web around this, and none of it is specific to Lemmy.
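
      To make that concrete, here’s a minimal sketch of the load-balancing half: two hypothetical app servers behind one HAProxy (global/defaults sections omitted). The IP addresses, the port (Lemmy’s default 8536), and the health-check path are illustrative guesses, not a tested config:

          # Round-robin plain HTTP across two hypothetical Lemmy app servers.
          frontend lemmy_front
              bind *:80
              mode http
              default_backend lemmy_apps

          backend lemmy_apps
              mode http
              balance roundrobin
              # basic health check against a cheap API endpoint (assumed path)
              option httpchk GET /api/v3/site
              server app1 10.0.0.11:8536 check
              server app2 10.0.0.12:8536 check

      One caveat on the read replicas: they only help if the application actually sends read queries to them, and as far as I know stock Lemmy doesn’t do read/write splitting out of the box, so replicas mostly buy you failover rather than extra throughput.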

      • @acchariya
        English
        1 year ago

        You can scale app servers surprisingly far before you need to shard a decently sized single-master Postgres cluster: probably 20-50+ times the write traffic this instance currently seems to have. Thinking about it, the vertical scaling was probably due to the WebSockets thing. With 0.18 that goes away, requests can be stateless and cached, and you can throw a load balancer and n app servers at it.
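
        As a rough sketch of what that could look like once responses are cacheable, here’s HAProxy’s built-in small-object cache (available since 1.8) bolted onto the backend from the sketch above. The sizes, the TTL, and the assumption that logged-in requests carry an Authorization header are all illustrative guesses:

            # Cache small anonymous GET responses for a few seconds so repeated
            # reads never reach the app servers or Postgres.
            cache lemmy_cache
                total-max-size 256  # MB of RAM for cached objects
                max-age 30          # seconds before a cached entry expires

            backend lemmy_apps
                mode http
                balance roundrobin
                # skip the cache for requests carrying an auth header, so
                # logged-in traffic always reaches an app server
                http-request cache-use lemmy_cache unless { req.hdr(authorization) -m found }
                http-response cache-store lemmy_cache
                server app1 10.0.0.11:8536 check
                server app2 10.0.0.12:8536 check

        A dedicated cache like nginx or Varnish gives you more control, but even a few seconds of caching on the hottest read endpoints takes a lot of load off a single database.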