• @[email protected]
    link
    fedilink
    104
    edit-2
    10 months ago

    If you can fuck up a database in prod, you have a systems problem caused by your boss. Getting fired for that shit would be a blessing, because that company sucks ass.

    • @fidodo · 52 points · 10 months ago

      What if you’re the one who was in charge of adding safeguards?

      • @[email protected]
        link
        fedilink
        4010 months ago

        Never fire someone who fucked up (again, it isn’t their fault anyway). They know more about the system than anyone, and they can help fix it.

        • @raldone01 · 14 points · 1 month ago

          This is usually the way, but some people just don’t learn from their mistakes…

      • @[email protected]
        link
        fedilink
        English
        510 months ago

        If you are adding guardrails to production… It’s the same story.

        The boss should buy enough equipment for a staging environment. Don’t touch prod: redeploy everything on a secondary with the new guardrails, load a read-only export from prod, and cut over services to the secondary when it’s ready.

    • @[email protected]
      link
      fedilink
      910 months ago

      Small companies often allow devs access to prod DBs. It doesn’t change the fact that it’s a catastrophically stupid decision, but you often can’t do anything about it.

      And of course, when they inevitably fuck up, the blame will be on the IT team for not implementing the necessary restrictions.

      Frequent snapshots ftmfw.
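      Even a plain read-only role would count as “necessary restrictions”; a rough PostgreSQL sketch with made-up names:

          CREATE ROLE dev_readonly LOGIN PASSWORD 'change-me';   -- placeholder credentials
          GRANT CONNECT ON DATABASE proddb TO dev_readonly;
          GRANT USAGE ON SCHEMA public TO dev_readonly;
          GRANT SELECT ON ALL TABLES IN SCHEMA public TO dev_readonly;
          -- devs connect as dev_readonly; writes go through reviewed migrations instead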

  • @[email protected]
    link
    fedilink
    8410 months ago

    I always run my queries in a script that will automatically roll back if the number of rows changed isn’t one. If I have to change multiple rows, I should probably ask myself what I’m doing.
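    Something like this in PostgreSQL, for example (the table and the WHERE clause are made up; the uncaught exception forces the whole block to roll back):

        DO $$
        DECLARE
          affected integer;
        BEGIN
          UPDATE accounts SET status = 'closed' WHERE id = 42;
          GET DIAGNOSTICS affected = ROW_COUNT;
          IF affected <> 1 THEN
            -- raising aborts the transaction, so the UPDATE never commits
            RAISE EXCEPTION 'expected exactly 1 row, got %', affected;
          END IF;
        END $$;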

    • @assembly · 86 points · 10 months ago

      Damn, that’s a good idea. Going to write that down, put it on the to-do list, and regret not doing it.

    • @takeda · 37 points · 10 months ago

      I always start a session by disabling autocommit. (Note: I could add it to my settings, but that would backfire the one time my settings don’t get applied, so I make it a habit to type it out every time, first thing after I connect.)

      BTW: what kind of genius decides that autocommit should be enabled by default?
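      In psql, for example, that habit is one line at the top of the session (MySQL’s rough equivalent is SET autocommit = 0;):

          \set AUTOCOMMIT off
          -- every statement now runs inside a transaction that stays open
          UPDATE accounts SET status = 'closed' WHERE id = 42;   -- made-up table
          ROLLBACK;   -- nothing persists unless you explicitly COMMIT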

    • @[email protected]
      link
      fedilink
      1
      edit-2
      10 months ago

      Or at least run it in the test database first.

      Or run your updates/deletes as a SELECT first.
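      E.g. with a made-up table: prove the WHERE clause first, then reuse it verbatim in the write:

          SELECT * FROM accounts WHERE id = 42;   -- eyeball exactly what will be hit
          DELETE FROM accounts WHERE id = 42;     -- same WHERE clause, now for real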

  • @marcos · 59 points · 10 months ago

    Don’t you people have a development environment?

  • @[email protected]
    link
    fedilink
    4410 months ago

    If he had recognized the typo with the space after the D:\ in his restore command, he could have been saved at the bargaining stage. I am so glad I don’t work with this stuff anymore.

  • @[email protected]
    link
    fedilink
    1810 months ago

    A few months back I crashed a DB in prod. I detached it, and when I tried to reattach it, it simply refused, saying it was corrupted or some shit.
    Lucky for me, we have a backup solution.
    Unfortunately, it was being upgraded, with difficulties.
    That was a long day.

  • NoLifeGaming · 10 points · 10 months ago

    Makes me think of what happened to GitLab.

  • @[email protected]
    link
    fedilink
    610 months ago

    I have several times insisted that a migration be done via an ad hoc endpoint, because I’m a jerk, but also because it’s much easier to test that way, and no one has to yolo-connect directly to prod.

    • @[email protected]
      link
      fedilink
      110 months ago

      Endpoint? Why the fuck is a migration using an endpoint? If you want testability, a script will do just fine.

      • @[email protected]
        link
        fedilink
        210 months ago

        Because I didn’t want someone to yolo-connect to production, and we don’t have infrastructure in place for running arbitrary scripts against production. An HTTP endpoint takes very little time to write, and lets you take advantage of CI/CD/test infrastructure that’s already in place.

        This was for a larger, more complicated change. Smaller ones can go in as regular data migrations in source control, but those still go through code review and get deployed to dev before going out.

        • @[email protected]
          link
          fedilink
          110 months ago

          You’re definitely overcomplicating it and adding unnecessary bloat/tech debt. But if it works for you, then it works.