• @[email protected]
    link
    fedilink
    English
    281 year ago

    This made me chuckle for a good 10 minutes!

    At work we’re currently in the last layer of the iceberg: 35+ microservices, ten different Kubernetes instances for different uses, and a supported on-prem version.

    It is a bit of a learning curve, and we definitely have two “mono-services” that we’re actively breaking down because they’ve accumulated seven years’ worth of different ideas and implementations.

    I think I’m currently still heavily in favor of microservices for a project of our scale, as they easily let us enhance, trash, or reimplement different areas of the app; but man is it a pain in the ass to manage sometimes 😂

    • @[email protected]OP
      link
      fedilink
      English
      141 year ago

      I think we have ~400 microservices of varying types that deploy in many ways to many places (we’re big proponents of using the right tools for the job rather than forcing preferred tools), and we’re definitely in the last block. Although my life as a DevOps guy would be a lot easier with a handful of monster monoliths, I understand it doesn’t make sense at our scale. I can fantasize though, and this meme hits extremely close to home 😅

      Tangentially, at my previous job we were in blocks 4 and 5 of transitioning away from a single monolith. Major issues arise when a “Java-only shop for 20 years” starts down this path with an extreme “we only use Java” mindset. Java Kubernetes controllers? lmfao, no thanks (they wanted them though 😑)

  • @[email protected]
    link
    fedilink
    English
    111 year ago

    Trying to do PostgreSQL TLS with an internal PKI chain created by cert-manager made me want to throw my laptop out the window yesterday.

    This stuff is hard.
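
    For anyone fighting the same thing: the cert-manager half is roughly one Certificate resource pointed at your internal issuer. Below is a minimal, untested sketch using the official Python Kubernetes client; the namespace, issuer, and DNS names are made-up placeholders.

    ```python
    # Hypothetical sketch: request a Postgres server certificate from an
    # internal cert-manager PKI. Assumes a ClusterIssuer named "internal-ca"
    # already exists; all names below are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod

    certificate = {
        "apiVersion": "cert-manager.io/v1",
        "kind": "Certificate",
        "metadata": {"name": "postgres-server-tls", "namespace": "db"},
        "spec": {
            "secretName": "postgres-server-tls",  # secret the keypair lands in
            "issuerRef": {"name": "internal-ca", "kind": "ClusterIssuer"},
            "dnsNames": ["postgres.db.svc", "postgres.db.svc.cluster.local"],
            "usages": ["server auth"],
        },
    }

    # cert-manager watches for this resource and issues the certificate.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="cert-manager.io",
        version="v1",
        namespace="db",
        plural="certificates",
        body=certificate,
    )
    ```

    Postgres then needs ssl=on and the tls.crt / tls.key / ca.crt mounted out of the generated secret.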

    • @[email protected]OP
      link
      fedilink
      English
      81 year ago

      Just tell the security team to handle it 😎

      (My security team would NOT be amused by this joke suggestion)

    • @vapeloki · 8 points · 1 year ago

      Use a PostgreSQL operator for that.

      The number of PostgreSQL databases without replication or backups I have seen… and 90% of them contained critical data.

      If you really need to run the DB inside containers, never do it by hand.

      And as a full-time PostgreSQL DBA: NEVER run your production databases inside k8s.
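
      To make the “operator, never by hand” point concrete: with the Zalando postgres-operator linked downthread, a whole HA cluster is one custom resource, and the operator wires up the replication and backup machinery. A rough sketch via the Python Kubernetes client (assumes the operator is already installed; every name is a placeholder):

      ```python
      # Hypothetical sketch: declare a 3-node Postgres cluster for the Zalando
      # postgres-operator (https://github.com/zalando/postgres-operator).
      from kubernetes import client, config

      config.load_kube_config()

      pg_cluster = {
          "apiVersion": "acid.zalan.do/v1",
          "kind": "postgresql",
          "metadata": {"name": "acid-demo-cluster", "namespace": "default"},
          "spec": {
              "teamId": "acid",
              "numberOfInstances": 3,  # one primary + two streaming replicas
              "volume": {"size": "10Gi"},
              "postgresql": {"version": "15"},
              "users": {"app_user": []},
              "databases": {"app_db": "app_user"},  # database name -> owner
          },
      }

      # The operator watches this resource and creates the StatefulSet,
      # services, and Patroni-managed replication for you.
      client.CustomObjectsApi().create_namespaced_custom_object(
          group="acid.zalan.do",
          version="v1",
          namespace="default",
          plural="postgresqls",
          body=pg_cluster,
      )
      ```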

        • andrew · 7 points · edited · 1 year ago

          Because dogma. There are tons of places running production postgres, and indeed many other stateful services, in Kubernetes.

          Edit, because presumably GP downvoted me for contradicting them, even though I’ve personally overseen this in production at Fortune 100 companies and unicorn startups alike:

          https://dok.community/

          https://github.com/zalando/postgres-operator

          And plenty of YouTube videos from various KubeCons and CloudNativeCons. Kubernetes is a runtime that provides plenty of primitives for running stateful workloads safely, even better than is otherwise possible. Anyone who says otherwise hasn’t bothered to learn enough about the possibilities and is likely citing oft-quoted dogma that dates back to the earliest days of k8s and was questionable even then.

          • @vapeloki · 2 points · edited · 1 year ago

            No, because of many factors:

            Over the last 10 years I have managed 700 database clusters, anything from a few megs to terabyte-scale database sizes.

            The main issues are:

            • Huge pages. They work on bare metal, not so much in Kubernetes (I am not talking about transparent huge pages…)
            • Core/cache locality.
            • NUMA

            And of course: maintainability, especially point-in-time (PIT) recovery.

            Zalando has an open-source operator plus Patroni-based images. They work for many cases and are a great option if you can’t manage PostgreSQL on bare metal.

            And of course: if you have everything running on k8s, running a few bare-metal servers for a DB is a pain in the ass, and in such cases it is of course better to just deploy an operator in your cluster and let it handle the heavy lifting like backup and replication.

        • andrew · 2 points · 1 year ago

          Generally if someone tells you to never do something, even if they’re a supposed authority, and they don’t offer reasoning, it’s probably better to investigate further.

  • @[email protected]
    link
    fedilink
    English
    91 year ago

    It’s always the full circle: from monolith to microservices, then someone just copies the code back into the main repo, and we’re back to square one.

  • DonjonMaister · 8 points · 1 year ago

    I’m starting to learn full-stack development. This meme scares me, ngl.

    • @[email protected]
      link
      fedilink
      English
      51 year ago

      It’s not so bad; microservices let you focus on your current task instead of having to deal with years upon years of legacy code every day.

      Microservices can get messy but they are much better than the alternative even if you aren’t looking at performance and scalability.

  • @marcos · 3 points · 1 year ago

    We are at layer 5 at work, blissfully ignoring anything deeper. Which is quite the thing, because microservices don’t become very useful until you reach layer 6.

    But well, I stopped advocating for consolidating most of the services a couple of years ago. AFAIK, we need some 3 large services and everything else should be a monolith.

  • @casualbrow · 2 points · 1 year ago

    Don’t do my boy monorepo like that

    • stevecrox · 6 points · edited · 1 year ago

      The big argument for monorepos is that checking out multiple repositories is “hard” while checking out one repository is “easy”, but…

      Service-Oriented Architecture became a thing because monolithic code bases were often becoming spaghetti. I worked on a project where removing an option from a preferences window (max map zoom) broke a message table, because the number of visible rows in the table (not its size in the UI) was linked, for no reason, to the max zoom you supplied to a map library.

      Thus the idea that you should wrap everything you do as a self-contained service with a known interface: you could write an entirely new implementation of a service, implement the interface, and everything would keep working. Microservices are a continuation of this idea.
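
      A toy sketch of that known-interface idea (hypothetical names, in Python):

      ```python
      from abc import ABC, abstractmethod

      class MapProvider(ABC):
          """The known interface: callers depend on this, never on an implementation."""

          @abstractmethod
          def max_zoom(self) -> int: ...

      class OsmMapProvider(MapProvider):
          def max_zoom(self) -> int:
              return 19

      class StubMapProvider(MapProvider):
          """An entirely new implementation; callers keep working unchanged."""

          def max_zoom(self) -> int:
              return 5

      def render(provider: MapProvider) -> str:
          # Depends only on the interface, so swapping implementations is safe.
          return f"rendering tiles up to zoom {provider.max_zoom()}"

      print(render(OsmMapProvider()))
      print(render(StubMapProvider()))
      ```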

      Yet every Node/Python-based monorepo I have seen has Python files imported directly from inside another component/service. Not simply common artifacts, but all sorts of random parts, subverting the concept of a microservice (and recreating the problem).

      Separate repositories block this, because each repository is built in isolation on CI, which flags the illicit link. That forces you to release each repository and pull things in as dependencies, which in turn encourages you to design your code to support that.

      A common monorepo problem is to shove everything into a Docker image and call it a day. Then, if you need a class from one monorepo in another, you don’t have an artifact, so lazy devs just copy/paste files between monorepos.

      Monorepos aren’t bad practice by themselves, but they encourage bad practice. Separate repositories encourage good practice (the very need to manage them separately drives it).

    • @ShadyGrove · 0 points · 1 year ago

      Yea monorepos are fine, good even.

  • @ShadyGrove · 1 point · 1 year ago

    This is my life right now, throw in event sourcing and streaming and I got myself a stew. A shit stew.