• andrew · 58 points · 11 months ago

      It last ran a week ago, and we technically haven’t tested it. Just our hot replicas, which also just deleted all that data.

      • @datelmd5sum · 5 points · 11 months ago

        ah the cold sweat and clenching of the anus

    • @Alexstarfire · 11 points · 11 months ago

      Back up? No, we only go forward in this company

      • @[email protected] · 3 points · 11 months ago

        “That’s why the windshield is bigger than the rear view mirror, we should be vigilant in remaining forward looking.”

        Said by an exec in my chain of command when he caused a huge cascading fuck up in the organization and there was no postmortem allowed.

    • @IHawkMike · 22 points · 11 months ago

      I think the technical term for this is an RGE.

      (Resume Generating Event)

    • @toxic_cloud · 23 points · 11 months ago

      Doctors HATE this one simple trick! Lose up to 100% of MyChart data - and KEEP it off!

      Can help reduce blood pressure, high cholesterol, weight, height, gender, name and more to NULL! Wake up feeling NULL and NULL!

  • @[email protected] · 66 points · 11 months ago

    For everyone’s sanity, please restrict access to the prod DB to like two people. No company wants that to happen to them, and no developer wants to do that.

      • @breadsmasher · 35 points · 11 months ago

        DataGrip has a “Connect as READONLY” option, and likely other database IDEs do as well. Makes me feel a little safer.

          • @finestnothing · 5 points · 11 months ago

            I don’t use read-only with DBeaver, but I do have the prod servers set to automatically wrap statements in transactions, so I have to hit a button to commit. I’m almost certain it also asks for confirmation that I want to make the changes to prod, which is nice too (I rarely have to touch our SQL Server prod).
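
            That read-only guard can also be enforced at the driver level, not just in the IDE. A minimal sketch using Python’s sqlite3 standard library (a throwaway file stands in for prod; DataGrip/DBeaver are not involved): opening the same database read-only via a URI makes every write fail loudly.

```python
import os
import sqlite3
import tempfile

# A throwaway file database stands in for "prod".
path = os.path.join(tempfile.mkdtemp(), "demo.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE t (id INTEGER)")
rw.execute("INSERT INTO t VALUES (1)")
rw.commit()
rw.close()

# Reopen the same file read-only via a URI: reads work, writes are refused.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
print(ro.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1
try:
    ro.execute("DELETE FROM t")
except sqlite3.OperationalError as err:
    print("write refused:", err)
```

            The same idea exists in most drivers and IDEs under names like read-only mode or read-only transactions; the sqlite3 URI form is just the easiest to demonstrate.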

    • @[email protected] · 26 points · 11 months ago

      Just a funny story. All of our devs and even BAs used to have prod access. We all knew this was a bad idea and put in a process of hiring a DBA.

      I think in the first two weeks the DBA screwed up prod twice. I can’t remember the first mess up but the second he had a lock on the database and then went to lunch.

      We eventually hired two awesome DBAs to replace that one but oh boy.

      • @[email protected] · 14 points · 11 months ago

        Imagine being hired to help prevent people from fucking something up, only to fuck that thing up in your first week—not once, but twice. You’d think after the first time it wouldn’t happen again…

    • @[email protected] · 4 points · 11 months ago

      I would say you can expand that with the following criteria:

      1. A lot of people can have read access, but only a few should have write access, and read access should be restricted to specific tables without PII.
      2. The people with write access should go through a Change Approval process: they submit the SQL they’re going to run, and someone else approves or denies it before it can be run.
      3. Every piece of SQL that modifies a table should be annotated with a comment containing the ticket number in which that change was approved.
      4. You should be able to roll back any committed change within an hour of it happening.
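
      The ticket-annotation rule in particular is easy to enforce mechanically before a script ever reaches prod. A hypothetical pre-flight check in Python (the “CHG-1234” ticket format and the helper name are invented for illustration):

```python
import re

# Hypothetical pre-flight check: refuse any change script that is not
# annotated with a ticket number (the "ABC-123" format is an assumption).
TICKET = re.compile(r"--\s*[A-Z]+-\d+")

def has_ticket(sql: str) -> bool:
    return bool(TICKET.search(sql))

print(has_ticket("-- CHG-1234: deactivate duplicate account\n"
                 "UPDATE users SET active = 0 WHERE id = 7"))  # True
print(has_ticket("DELETE FROM users"))                         # False
```

      A check like this would typically run in CI or in the change-approval tooling, so unannotated SQL is rejected before a human even reviews it.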

  • palordrolap · 58 points · edited · 11 months ago

    8388409 = 2^23 - 199

    I may have noticed this on a certain other aggregator site once upon a time, but I’m still none the wiser as to why.

    199 rows kind of makes sense for whatever a legitimate query might have been, but if you’re going to make up a number, why 2^23? Why subtract? Am I metaphorically barking up the wrong tree?

    Is this merely a mistyping of 8388608 and it was supposed to be ±1 row? Still the wrong (B-)tree?

    WHY DO I CARE
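
    For what it’s worth, the arithmetic above checks out (a quick Python check):

```python
# Sanity-checking the numbers in question.
print(2**23)              # 8388608
print(2**23 - 199)        # 8388409
print(8388608 - 8388409)  # 199
```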

      • palordrolap · 23 points · 11 months ago

        In a place for programmer humour, you’ve got to expect there’s at least one person who knows their powers of two. (Though I am missing a few these days).

        As for considering me to be Ramanujan reborn, if there’s any of Srinivasa in here, he’s not been given a full deck to work with this time around and that’s not very karmic of whichever deity or deities sent him back.

        • Fuck spez · 12 points · 11 months ago

          I know up to like 2^16 or maybe 2^17 while sufficiently caffeinated. Memorizing up to, or beyond, 2^23 is nerd award worthy.

          • @[email protected] · 6 points · 11 months ago

            I know that 2^20 is a bit more than a million because it’s the maximum number of rows Excel can handle.

          • palordrolap · 1 point · edited · 11 months ago

            For me it’s: 2^1 to 2^16 (I remember the 8-bit era), a hazy gap and then 2^24 (the marketing for 24 bit colour in the 90s had 16777216 plastered all over it). Then it’s being uncomfortably lost up to 2^31 and 2^32, which I usually recognise when I see them (hello INT_MAX and UINT_MAX), but I don’t know their digits well enough to repeat. 2^64 is similar. All others are incredibly vague or unknown.

            2^23 as half of 2^24 and having a lot of 8s in it seems to have put it into the “recognisable” category for me, even if it’s in that hazy gap.

            So I grabbed a calculator to confirm.

    • @mrbaby · 15 points · 11 months ago

      And you can save a bunch of time by inlining all this into one query

    • @Agent641 · 5 points · 11 months ago

      The four horsemen of the datapocalypse

      • Juja · 9 points · 11 months ago

        The select after the update is to check if the update went through properly. You can have more selects before the update if you wanted to.

  • @[email protected] · 36 points · edited · 11 months ago

    Ah reminds me of the time (back in the LAMP days) when I tried to apply this complicated equation that sales had come up with to our inventory database. This was one of those “just have the junior run it at midnight” type of shops. Anyway, I made a mistake and ended up exactly halving all inventory prices on production. See OP’s picture for my face.

    In retrospect, I’m thankful for that memory.

    • @Agent641 · 11 points · edited · 11 months ago

      I’ve had one of those moments. Where you fuck up so bad that your emotions wrap all the way around from panic, through fear, confusion, rage, dread, and back to neutral, and you go “Hmm…”

      • @[email protected] · 6 points · edited · 11 months ago

        Yeah that’s a good way to put it. It’s like so close to the thing you were dreading, that it’s a sort of sick relief when it actually happens.

        It’s like…

        "just like the simulations" meme

  • @Rhinoshock · 30 points · 11 months ago

    In T-SQL:

    BEGIN TRANSACTION

    {query to update/delete records}

    -- if the query affected the expected number of rows:
    COMMIT TRANSACTION

    -- otherwise:
    ROLLBACK TRANSACTION

    Note: I’ve been told before that this will lock the affected table(s) until the changes made are committed or rolled back, but after looking it up it looks like it depends on a lot of minor details. Just be careful if you use it in production.
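
    The same commit-or-rollback pattern can be sketched outside T-SQL using the driver’s affected-row count. A minimal Python/sqlite3 version (the table and expected count are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, active INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, 1)", [(i,) for i in range(5)])
conn.commit()

expected = 1  # how many rows the change should touch
cur = conn.execute("UPDATE users SET active = 0 WHERE id = 3")
if cur.rowcount == expected:
    conn.commit()
    print("committed:", cur.rowcount, "row(s)")
else:
    conn.rollback()
    print("rolled back:", cur.rowcount, "row(s)")
```

    The key design point is the same as in the T-SQL snippet: the UPDATE runs inside an open transaction, so an unexpected row count can still be undone.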

    • @jaybone · 7 points · 11 months ago

      Lol why did I have to scroll so far to see ROLLBACK

      • @[email protected] · 3 points · 11 months ago

        Because this is c/programmerhumor and the OP hasn’t covered ROLLBACK yet in his sophomore DB class.

    • @[email protected] · 4 points · 11 months ago

      If for example a client application is (accidentally) firing doubled requests to your API, you might get deadlocks in this case. Which is not bad per se, as you don’t want to conform to that behaviour. But it might also happen if you have two client applications with updates to the same resource (patching different fields for example), in that case you’re blocking one party so a retry mechanism in the client or server side might be a solution.

      Just something we noticed a while ago when using transactions.

  • kamen · 25 points · 11 months ago

    This is now the correct database.

    • @[email protected] · 19 points · 11 months ago

      I don’t understand environments that don’t wrap things in transactions by default.

      Especially since an update or delete without a where clause is considered valid.

      • @finestnothing · 9 points · edited · 11 months ago

        I’m a data engineer who occasionally has to work in SQL Server. I use DBeaver and have our prod servers default to auto-wrapping in transactions, and I have to push a button and confirm I know it’s prod before it commits changes there. It’s great and has saved me when I accidentally had a script switch servers. For the sandbox server I don’t have that on, because the changes there don’t matter except for testing, and we can always remake the thing from scratch in a few hours. I haven’t had an oopsie yet and I hope to keep that streak.

      • @Ultraviolet · 3 points · 11 months ago

        SQL Server technically does behind the scenes, but automatically commits, which kind of defeats the purpose.
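
        The value of not auto-committing is easy to demonstrate. With Python’s sqlite3 (used here purely as a stand-in for the databases discussed above), DML statements open an implicit transaction by default, so a bad statement can still be rolled back before it is ever committed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.commit()

# With sqlite3's default isolation level, the INSERT below runs inside an
# implicit transaction: nothing is durable until commit() is called.
conn.execute("INSERT INTO t VALUES (1)")
conn.rollback()  # the mistake is caught before commit
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 0
```

        An auto-committing environment removes exactly that escape hatch: by the time you notice, the statement is already durable.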

    • @[email protected] · 38 points · 11 months ago

      Checking the backups… Ah yes, the backup done in August 2017.

      Hello boss, I broke the company. I’ll see myself out

      • Rosco · 9 points · 11 months ago

        You should take it upon yourself to make regular backups in case you fuck up really badly. I had an intern who deleted everything on his fifth day. Luckily I was automatically making backups twice a day, so it was fine.

          • Rosco · 6 points · 11 months ago

            Company was a shitshow; new features or changes were expected immediately, so we got used to working directly on prod. I told him to test everything on a dummy DB and show me before we submitted it, but he got around that when I wasn’t looking. The security tools were garbage, and I wasn’t allowed to change permissions.

              • Rosco · 1 point · edited · 11 months ago

                I left to pursue my studies, and the intern took my place and was put in charge of everything. I don’t know how he’s doing now, and I don’t really care.

        • @[email protected] · 2 points · 11 months ago

          Yep I do that on a local project basis before I make any updates. Saved me a couple times from my own mistakes 😅
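
          For small databases, even the driver can take the snapshot for you. A sketch with sqlite3’s online backup API (in-memory databases used purely for illustration):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (id INTEGER)")
src.execute("INSERT INTO t VALUES (42)")
src.commit()

# Snapshot the live database into a second connection while it stays open.
dst = sqlite3.connect(":memory:")
src.backup(dst)
print(dst.execute("SELECT id FROM t").fetchone()[0])  # 42
```

          Real prod databases have their own tooling for this (dumps, replicas, point-in-time recovery), but the habit the comments above describe, snapshot first, then touch prod, is the same at any scale.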

  • @[email protected] · 17 points · 11 months ago

    You can also do this by forgetting a WHERE clause. I know this because I ruined a production database in my early years.

    Always write your WHERE before your UPDATE, kids.

    • @themusicman · 6 points · 11 months ago

      Always start every command with EXPLAIN and don’t remove it until you’ve run it
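
      The footgun these comments describe is that an UPDATE with no WHERE clause is perfectly valid SQL and silently touches every row. A small sqlite3 demonstration (the table is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, active INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, 1)", [(i,) for i in range(100)])

# Perfectly valid SQL -- and it touches every row in the table.
cur = conn.execute("UPDATE users SET active = 0")
print(cur.rowcount)  # 100
```

      Checking the affected-row count before committing, as suggested earlier in the thread, is the usual defence against this.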

    • @bitflag · 2 points · 11 months ago

      I learned the same lesson the same way 😞