I have many services running on my server, and about half of them use postgres. When I installed them manually, I would always create a new database and reuse the same postgres instance for each service, which seems quite logical to me: the least amount of overhead, fast boot, etc.

But since I started using docker, most of the docker-compose files come with their own instance of postgres. Until now I just let them do it and have been running a couple of instances of postgres. But it’s getting kind of ridiculous how many postgres instances I run on one server.

Do you guys run several dockerized instances of postgres, or do you rewrite the docker compose files to point them at your one central postgres instance? And are there usually any problems with that, like version incompatibilities, etc.?

  • @[email protected]
    link
    fedilink
    English
    13
    8 months ago

    I only use the provided docker-compose as inspiration and do my own thing

    This is the correct way to look at it. Most applications that provide a docker compose do so as a convenience to get started quickly. It’s not necessarily what you should run.
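
    For example, a compose override along these lines points an app at a central instance instead of its bundled one (service name, network name, and the exact environment variable are made up for illustration; check what the app actually expects):

      # drop the bundled "db" service and point the app at an existing
      # central postgres reachable over a shared docker network
      services:
        someapp:
          image: someapp:latest
          environment:
            # the variable name differs per app; many accept a URL like this
            DATABASE_URL: postgres://someapp:secret@central-postgres:5432/someapp
          networks:
            - dbnet

      networks:
        dbnet:
          external: true   # created once with: docker network create dbnet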

    • @[email protected]
      link
      fedilink
      English
      5
      edit-2
      8 months ago

      It is recommended to run postgres for each service though, since they may have completely different needs / configurations for their queries to be optimal. For self-hosting, Lemmy and Matrix would be the big concerns here.

      • @[email protected]
        link
        fedilink
        English
        2
        edit-2
        8 months ago

        It is recommended to run postgres for each service

        Absolute sentences like this are rarely true. Sometimes it does make sense and sometimes it doesn’t. One postgres instance is often quite capable of supporting the needs of many applications, each with its own database. And sometimes you need to fine-tune things for a specific application.
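
        As a rough sketch of that shared setup (names and paths here are illustrative, not from any particular app), one central postgres with a database per application could look like:

          services:
            postgres:
              image: postgres:16
              environment:
                POSTGRES_PASSWORD: changeme
              volumes:
                - pgdata:/var/lib/postgresql/data
                # one CREATE DATABASE / CREATE USER script per app goes here;
                # the official image runs these only on first start of an empty data dir
                - ./init:/docker-entrypoint-initdb.d:ro
              networks:
                - dbnet

          volumes:
            pgdata:

          networks:
            dbnet:
              external: true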

        • @[email protected]
          link
          fedilink
          English
          2
          8 months ago

          Say what you want, it’s a recommendation, and it’s documented in quite a few deployment methods. The only benefit of centralizing it is if you are managing postgres without other tools, since juggling lots of separate instances by hand would be a pain in the butt. You’ll still run into apps that don’t run on later versions and others that require later versions, though.

          An example of a very popular one:

          How many databases should be hosted in a single PostgreSQL instance?

          Our recommendation is to dedicate a single PostgreSQL cluster (intended as primary and multiple standby servers) to a single database, entirely managed by a single microservice application. However, by leveraging the “postgres” superuser, it is possible to create as many users and databases as desired (subject to the available resources).

          The reason for this recommendation lies in the Cloud Native concept, based on microservices. In a pure microservice architecture, the microservice itself should own the data it manages exclusively. These could be flat files, queues, key-value stores, or, in our case, a PostgreSQL relational database containing both structured and unstructured data. The general idea is that only the microservice can access the database, including schema management and migrations.

          CloudNativePG has been designed to work this way out of the box, by default creating an application user and an application database owned by the aforementioned application user.

          Reserving a PostgreSQL instance to a single microservice owned database enhances:

          • resource management: in PostgreSQL, CPU and memory constrained resources are generally handled at the instance level, not the database level, making it easier to integrate it with Kubernetes resource management policies at the pod level
          • physical continuous backup and Point-In-Time-Recovery (PITR): given that PostgreSQL handles continuous backup and recovery at the instance level, having one database per instance simplifies PITR operations, differentiates retention policy management, and increases data protection of backups
          • application updates: enable each application to decide their update policies without impacting other databases owned by different applications
          • database updates: each application can decide which PostgreSQL version to use, and independently, when to upgrade to a different major version of PostgreSQL and at what conditions (e.g., cutover time)
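
          In docker compose terms that per-app pattern just means each stack ships and pins its own instance, roughly like this (a minimal sketch, service and volume names made up):

            services:
              lemmy-db:
                image: postgres:16    # this stack upgrades on its own schedule
                environment:
                  POSTGRES_PASSWORD: changeme
                volumes:
                  - lemmy-pgdata:/var/lib/postgresql/data

              synapse-db:
                image: postgres:15    # another app can stay on an older major
                environment:
                  POSTGRES_PASSWORD: changeme
                volumes:
                  - synapse-pgdata:/var/lib/postgresql/data

            volumes:
              lemmy-pgdata:
              synapse-pgdata: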
          
          • @[email protected]
            link
            fedilink
            English
            1
            8 months ago

            You’re talking about a microservices architecture running in a kubernetes cluster? FFS… 🙄

            That’s a ridiculous recommendation for a home-gamer. It’s all up to how you want to manage dependencies, backups, performance, etc. If one is happy to have a single instance, then there’s nothing wrong with that. If one wants multiple instances for other reasons, that’s fine too. There are pros and cons to each approach. Your “I saw somebody recommend it on the internets” notwithstanding.

            • @[email protected]
              link
              fedilink
              English
              2
              8 months ago

              It’s the one I’m using, but this isn’t just about running in a cluster. Some applications, like Matrix, even recommend running postgres separately. You can’t run everything on the same version all the time anyways.

              • @[email protected]
                link
                fedilink
                English
                1
                8 months ago

                You can’t run everything on the same version all the time anyways.

                Unless you’re doing something very specific with the database - yes you can. Most applications are fine with pretty generic SQL. For those that have specific requirements, well then give them their own instance. Or use that version for the ones that don’t much care…