• andrew · 58 points · 1 year ago

      It last ran a week ago and we technically haven’t tested it. Just our hot replicas, which also just deleted all that data.

      • @datelmd5sum · 5 points · 1 year ago

        ah the cold sweat and clenching of the anus

    • @Alexstarfire · 11 points · 1 year ago

      Back up? No, we only go forward in this company

      • @[email protected] · 3 points · 1 year ago

        “That’s why the windshield is bigger than the rear view mirror, we should be vigilant in remaining forward looking.”

        Said by an exec in my chain of command when he caused a huge cascading fuck up in the organization and there was no postmortem allowed.

    • @IHawkMike · 22 points · 1 year ago

      I think the technical term for this is an RGE.

      (Resume Generating Event)

    • @toxic_cloud · 23 points · 1 year ago

      Doctors HATE this one simple trick! Lose up to 100% of MyChart data - and KEEP it off!

      Can help reduce blood pressure, high cholesterol, weight, height, gender, name and more to NULL! Wake up feeling NULL and NULL!

  • @[email protected] · 66 points · 1 year ago

    For everyone’s sanity, please restrict access to the prod DB to like two people. No company wants that to happen to them, and no developer wants to do that.
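
    A minimal sketch of what that might look like in T-SQL (role, schema and user names here are made up for illustration): nearly everyone gets a read-only role, and only a couple of named accounts keep write access.

    CREATE ROLE app_readonly;
    GRANT SELECT ON SCHEMA::dbo TO app_readonly;                  -- read-only for almost everyone
    DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_readonly;
    ALTER ROLE app_readonly ADD MEMBER some_developer;            -- hypothetical user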

      • @breadsmasher · 35 points · 1 year ago

        Datagrip has an option, and likely other database IDEs do as well - “Connect as READONLY”. Makes me feel a little safer

          • @finestnothing · 5 points · 1 year ago

            I don’t use readonly with DBeaver, but I do have the prod servers set to automatically open transactions, so I have to hit a button to commit. I’m almost certain it asks for confirmation that I want to make the changes to prod, which is nice too (I rarely have to touch our SQL Server prod).

    • @[email protected] · 26 points · 1 year ago

      Just a funny story. All of our devs and even BAs used to have prod access. We all knew this was a bad idea and started the process of hiring a DBA.

      I think in the first two weeks the DBA screwed up prod twice. I can’t remember the first mess up but the second he had a lock on the database and then went to lunch.

      We eventually hired two awesome DBAs to replace that one but oh boy.

      • @[email protected] · 14 points · 1 year ago

        Imagine being hired to help prevent people from fucking something up, only to fuck that thing up in your first week—not once, but twice. You’d think after the first time it wouldn’t happen again…

    • @[email protected] · 4 points · 1 year ago

      I would say you can expand that with the following criteria (a rough sketch of point 3 follows below):

      1) A lot of people can have read access, but only a few should have write access, and read access should be restricted to specific tables without PII.
      2) The people with write access should go through a Change Approval process: they submit the SQL they’re going to run, and someone else approves or denies it before it can be run.
      3) Every piece of SQL that modifies a table should be annotated with a comment containing the ticket number under which that change was approved.
      4) You should be able to roll back any committed change within an hour of it happening.
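
      As a rough illustration of point 3, a change script might look something like this (table name and ticket number are made up):

      -- TICKET-1234 (hypothetical): expire unused trial accounts, approved per the change request
      BEGIN TRANSACTION;

      UPDATE dbo.accounts                                  -- hypothetical table
      SET    status = 'expired'
      WHERE  plan = 'trial'
        AND  last_login < DATEADD(year, -1, GETDATE());

      -- Compare @@ROWCOUNT against the estimate in the ticket, then COMMIT or ROLLBACK.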

  • palordrolap · 58 points · 1 year ago (edited)

    8388409 = 2^23 - 199

    I may have noticed this on a certain other aggregator site once upon a time, but I’m still none the wiser as to why.

    199 rows kind of makes sense for whatever a legitimate query might have been, but if you’re going to make up a number, why 2^23? Why subtract? Am I metaphorically barking up the wrong tree?

    Is this merely a mistyping of 8388608 and it was supposed to be ±1 row? Still the wrong (B-)tree?
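
    For what it’s worth, the arithmetic checks out: 2^23 = 8,388,608 and 8,388,608 - 199 = 8,388,409, whereas a ±1 slip around 2^23 would give 8,388,607 or 8,388,609.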

    WHY DO I CARE

      • palordrolap · 23 points · 1 year ago

        In a place for programmer humour, you’ve got to expect there’s at least one person who knows their powers of two. (Though I am missing a few these days).

        As for considering me to be Ramanujan reborn, if there’s any of Srinivasa in here, he’s not been given a full deck to work with this time around and that’s not very karmic of whichever deity or deities sent him back.

        • Fuck spez · 12 points · 1 year ago

          I know up to like 2^16 or maybe 2^17 while sufficiently caffeinated. Memorizing up to, or beyond, 2^23 is nerd award worthy.

          • @[email protected] · 6 points · 1 year ago

            I know that 2^20 is something more than a million because it is the maximum number of rows Excel can handle.

          • palordrolap · 1 point · 1 year ago (edited)

            For me it’s: 2^1 to 2^16 (I remember the 8-bit era), a hazy gap and then 2^24 (the marketing for 24 bit colour in the 90s had 16777216 plastered all over it). Then it’s being uncomfortably lost up to 2^31 and 2^32, which I usually recognise when I see them (hello INT_MAX and UINT_MAX), but I don’t know their digits well enough to repeat. 2^64 is similar. All others are incredibly vague or unknown.

            2^23 as half of 2^24 and having a lot of 8s in it seems to have put it into the “recognisable” category for me, even if it’s in that hazy gap.

            So I grabbed a calculator to confirm.

    • @mrbaby · 15 points · 1 year ago

      And you can save a bunch of time by inlining all this into one query

    • @Agent641 · 5 points · 1 year ago

      The four horsemen of the datapocalypse

      • Juja · 9 points · 1 year ago

        The select after the update is to check if the update went through properly. You can have more selects before the update if you wanted to.
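
        Something like this, with a made-up table for illustration:

        SELECT COUNT(*) FROM dbo.users WHERE last_login < '2015-01-01';   -- how many rows should the update hit?
        UPDATE dbo.users SET is_active = 0 WHERE last_login < '2015-01-01';
        SELECT COUNT(*) FROM dbo.users WHERE is_active = 0 AND last_login < '2015-01-01';   -- did it hit what we expected?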

  • @[email protected] · 36 points · 1 year ago (edited)

    Ah reminds me of the time (back in the LAMP days) when I tried to apply this complicated equation that sales had come up with to our inventory database. This was one of those “just have the junior run it at midnight” type of shops. Anyway, I made a mistake and ended up exactly halving all inventory prices on production. See OP’s picture for my face.

    In retrospect, I’m thankful for that memory.

    • @Agent641 · 11 points · 1 year ago (edited)

      I’ve had one of those moments, where you fuck up so bad that your emotions wrap all the way around from panic, through fear, confusion, rage, dread and back to neutral, and you go “Hmm…”

      • @[email protected] · 6 points · 1 year ago (edited)

        Yeah that’s a good way to put it. It’s like so close to the thing you were dreading, that it’s a sort of sick relief when it actually happens.

        It’s like…

        "just like the simulations" meme

  • @Rhinoshock · 30 points · 1 year ago

    In T-SQL:

    BEGIN TRANSACTION

    {query to update/delete records}

    -- If the query reported the expected number of affected rows:
    COMMIT TRANSACTION

    -- If it did not:
    ROLLBACK TRANSACTION

    Note: I’ve been told before that this will lock the affected table(s) until the changes made are committed or rolled back, but after looking it up it looks like it depends on a lot of minor details. Just be careful if you use it in production.
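
    As a concrete (made-up) example of that pattern, checking the affected row count before deciding:

    BEGIN TRANSACTION

    DELETE FROM dbo.sessions            -- hypothetical table
    WHERE expires_at < '2023-01-01'

    SELECT @@ROWCOUNT                   -- how many rows did the DELETE actually touch?

    -- A few hundred, as expected:
    COMMIT TRANSACTION
    -- Eight million instead:
    -- ROLLBACK TRANSACTION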

    • @jaybone · 7 points · 1 year ago

      Lol why did I have to scroll so far to see ROLLBACK

      • @[email protected] · 3 points · 1 year ago

        Because this is c/programmerhumor and the OP hasn’t covered ROLLBACK yet in his sophomore DB class.

    • @[email protected] · 4 points · 1 year ago

      If, for example, a client application is (accidentally) firing duplicate requests at your API, you might get deadlocks in this case. Which is not bad per se, as you don’t want to conform to that behaviour. But it might also happen if you have two client applications with updates to the same resource (patching different fields, for example); in that case you’re blocking one party, so a retry mechanism on the client or server side might be a solution.

      Just something we noticed a while ago when using transactions.
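
      A server-side retry can be as simple as catching the deadlock-victim error (1205 in SQL Server) and trying again. This is only a rough sketch; the table and update are made up:

      DECLARE @retries int = 3;
      WHILE @retries > 0
      BEGIN
          BEGIN TRY
              BEGIN TRANSACTION;
              UPDATE dbo.resources SET field_a = 'x' WHERE id = 42;   -- hypothetical contended update
              COMMIT TRANSACTION;
              BREAK;                                                  -- success, stop retrying
          END TRY
          BEGIN CATCH
              IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
              IF ERROR_NUMBER() = 1205                                -- we were chosen as the deadlock victim
                  SET @retries = @retries - 1;
              ELSE
                  THROW;                                              -- anything else: rethrow
          END CATCH
      END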

  • kamen · 25 points · 1 year ago

    This is now the correct database.

    • @[email protected] · 19 points · 1 year ago

      I don’t understand environments that don’t wrap things in transactions by default.

      Especially since an update or delete without a where clause is considered valid.
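
      Both of these parse and run without complaint, which is exactly the problem (table name made up):

      UPDATE dbo.accounts SET balance = 0;   -- perfectly valid: zeroes every row
      DELETE FROM dbo.accounts;              -- also valid: deletes every row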

      • @finestnothing · 9 points · 1 year ago (edited)

        I’m a data engineer that occasionally has to work in SQL Server. I use DBeaver and have our prod servers default to auto-wrap in transactions, and I have to push a button and confirm I know it’s prod before it commits changes there. It’s great and has saved me when I accidentally had a script switch servers. For the sandbox server I don’t have that on, because the changes there don’t matter except for testing, and we can always remake the thing from scratch in a few hours. I haven’t had an oopsie yet and I hope to keep that streak.

      • @Ultraviolet · 3 points · 1 year ago

        SQL Server technically does behind the scenes, but automatically commits, which kind of defeats the purpose.
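
        If you want the opposite behaviour, SET IMPLICIT_TRANSACTIONS ON makes data-modifying statements open a transaction that stays pending until you explicitly COMMIT or ROLLBACK. A minimal sketch (table name made up):

        SET IMPLICIT_TRANSACTIONS ON;

        DELETE FROM dbo.sessions WHERE expires_at < GETDATE();   -- implicitly opens a transaction
        -- nothing is permanent yet; check the rows affected, then:
        COMMIT;   -- or ROLLBACK;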

    • @[email protected] · 38 points · 1 year ago

      Checking the backups… Ah yes, the backup done in August 2017.

      Hello boss, I broke the company. I’ll see myself out

      • Rosco · 9 points · 1 year ago

        You should take it upon yourself to make regular backups in case you fuck up really bad. I had an intern who deleted everything on his fifth day; luckily I was automatically making backups two times a day, so it was fine.
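
        Even a dumb scheduled job that runs something like the following twice a day (database name and path made up) beats discovering that the last backup is from 2017:

        BACKUP DATABASE inventory
        TO DISK = 'D:\backups\inventory_am.bak'
        WITH COMPRESSION, CHECKSUM;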

          • Rosco · 6 points · 1 year ago

            Company was a shitshow; new features or changes were expected immediately, so we got used to working directly on prod. I told him to test anything on a dummy DB and show me before we submitted it, but he got around it when I wasn’t looking. The security tools were garbage, and I wasn’t allowed to change permissions.

              • Rosco · 1 point · 1 year ago (edited)

                I left to pursue my studies, and the intern took my place and was put in charge of everything. I don’t know how he’s doing now and I don’t really care.

        • @[email protected] · 2 points · 1 year ago

          Yep I do that on a local project basis before I make any updates. Saved me a couple times from my own mistakes 😅

  • @[email protected] · 17 points · 1 year ago

    You can also do this by forgetting a WHERE clause. I know this because I ruined a production database in my early years.

    Always write your where before your insert, kids.
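
    One way to build that habit (table made up): write the statement as a SELECT with the WHERE clause first, sanity-check how many rows come back, and only then turn it into the UPDATE or DELETE.

    SELECT COUNT(*) FROM dbo.users WHERE id = 42;   -- 1 row? good.
    DELETE FROM dbo.users WHERE id = 42;            -- same WHERE clause, now the destructive part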

    • @themusicman · 6 points · 1 year ago

      Always start every command with EXPLAIN and don’t remove it until you’ve run it
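
      In PostgreSQL or MySQL, for instance, the EXPLAIN prefix only plans the statement without executing it, so the destructive part physically can’t run until you remove the word (generic example, not from the thread):

      EXPLAIN DELETE FROM users WHERE last_login < '2015-01-01';   -- shows the plan, touches nothing
      -- happy with the plan and the WHERE clause? remove EXPLAIN and run it for real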

    • @bitflag · 2 points · 1 year ago

      I learned the same lesson the same way 😞