• @[email protected]
    link
    fedilink
    13324 days ago

    You already stopped Steven in a prior commit.

    Also, if this is an organization setting, I’m extremely disappointed in your PR review process. If someone is committing vendor code to the repo someone else should reject the pull.

    • body_by_make · 103 points · 24 days ago

      What if I told you a lot of companies don’t have solid review requirement processes? Some barely use version control at all

        • @[email protected]
          link
          fedilink
          2624 days ago

          Yeah… Usually if you join a company with bad practices it’s because the people who already work there don’t want to do things properly. They tend to not react well to the new guy telling them what they’re doing wrong.

          Only really feasible if you’re the boss, or you have an unreasonable amount of patience.

          • @[email protected]
            link
            fedilink
            324 days ago

            Usually, the boss (or people above the boss) are the ones stopping it. Engineers know what the solution is. They may still resent the new guy saying it, though, because they’ve been through this fight already and are tired.

        • Ephera · 9 points · 24 days ago

          Eh, if everyone knows what they’re doing, it can be much better to not have it and rather do more pairing.

          But yes, obviously Steven does not know what they’re doing.

          • Ethan · 9 points · 24 days ago

            Better to not have version control!? Dear god I hope I never work on anything with you.

            • Doc Avid Mornington · 16 points · edited · 24 days ago

              Pretty sure they meant to not have review. Dropping peer review in favor of pair programming is a trendy idea these days. Heh, you might call it “pairs over peers”. I don’t agree with it, though. Pair programming is great, but two people, heads together, can easily get on a wavelength and miss the same things. It’s always valuable to have people who have never seen the new changes take a look. Also, peer review helps keep the whole team up to date on their knowledge of the code base, a seriously underrated benefit. But I will concede that trading peer review for pair programming is less wrong than giving up version control. Still wrong, but a lot less wrong.

              • Ethan · 2 points · 23 days ago

                Agreed. Even self-reviewing a few days after I wrote the code helps me see mistakes.

              • Ephera · 1 point · 24 days ago

                Well, to share my perspective – sorry, I mean, to explain to you why you’re wrong and differing opinions are unacceptable:

                I find that pairing works best for small teams, where everyone is in the loop on what everyone else is working on, and which don’t have a bottleneck in the form of a minority with much more skill or knowledge of the project.

                In particular, pairing is far more efficient at exchanging information. Not only is actively talking to one another quicker at bringing information across, there is also a ton of information about the code that never makes it into the actual code.

                While coding, you’ve tried two or three approaches, or you couldn’t write it the way you expected. The final snippet looks as if you wrote it in one pass, starting top-left and finishing bottom-right, with maybe one or two comments explaining a particularly weird workaround, but I’d wager more than 90% of the creation process is lost.

                This means that if someone needs to touch your code, they will know practically nothing of how it came to be, and they will be scared of changing more about it than strictly necessary. As a result, all code that gets checked in needs to be as perfect as possible, right from the start.

                Sharing all the information from the creation process through pairing empowers a team to write half-baked code, because enough people know how to finish baking it, or how to restructure it if a larger problem arises.

                Pairing is fickle, though. A bad management decision can easily torpedo it. I’m currently in a project where we practically cannot pair, because it’s 4 juniors who are new to the project vs. 2 seniors who built it up.
                Not only would we need to pair in groups of three to make that work at all, it also means we need to use the seniors’ time as efficiently as possible and rather waste the juniors’ time, which is what a review process excels at.

            • Ephera · 2 points · 24 days ago

              Ah, no, I meant a review process. Version control is always a good idea.

              • Ethan · 2 points · 23 days ago

                Ah, yeah that makes a lot more sense

      • @dylanTheDeveloper · 12 points · edited · 23 days ago

        I’ve seen people trade zip archives like Yu-Gi-Oh cards, using Excel as a source control manager, so it could be much, MUCH worse

        • @littlewonder · 10 points · edited · 23 days ago

          Dude, put content warnings on this. I have trauma from shared drives and fucking Jared leaving the Important File open on his locked computer while he takes off for a week, locking out access to anyone else.

      • @brlemworld · 2 points · edited · 24 days ago

        Those companies probably also pay ABSOLUTE SHITTER

  • @[email protected]
    link
    fedilink
    11624 days ago

    Correct me if I’m wrong, but it’s not enough to delete the files in the commit, unless you’re ok with Git tracking the large amount of data that was previously committed. Your git clones will be long, my friend

    • @Backfire · 125 points · 24 days ago

      You’d have to rewrite the history as to never having committed those files in the first place, yes.

      And then politely ask all your coworkers to reset their working environments to the “new” head of the branch, same as the old head but not quite.

      Chaos ensues. Sirens in the distance wailing.
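
      A minimal local sketch of that rewrite (repo and file names invented; `git filter-repo` is the modern tool for this, but `git filter-branch` ships with git itself):

```shell
# Throwaway repo that has node_modules committed by mistake.
tmp=$(mktemp -d) && cd "$tmp" && git init -q demo && cd demo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "init"
mkdir node_modules && echo "junk" > node_modules/pkg.js
git add . && git -c user.email=a@b -c user.name=a commit -q -m "oops: vendor code"
before=$(git rev-parse HEAD)

# Rewrite every commit so node_modules was never committed at all.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --index-filter \
  'git rm -r -q --cached --ignore-unmatch node_modules' HEAD >/dev/null

# Every rewritten commit gets a new hash, which is why everyone must reset.
after=$(git rev-parse HEAD)
[ "$before" != "$after" ] && echo "history rewritten"
```

      After force-pushing that, coworkers would run something like `git fetch && git reset --hard origin/<branch>`, hence the sirens.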

        • @Klear · 23 points · 24 days ago

          History is written by the victors. The rest of us have to nuke the project and start over.

      • @Dultas · 19 points · 24 days ago

        If this was committed to a branch would doing a squash merge into another branch and then nuking the old one not do the trick?

        • @Backfire · 13 points · 24 days ago

          Yes, that would do the trick
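
          Worth noting it only works because the junk is already deleted at the feature branch’s tip, so the squashed commit doesn’t re-add it. A local sketch with invented names:

```shell
# Throwaway repo: a branch once carried a huge blob that was later deleted.
tmp=$(mktemp -d) && cd "$tmp" && git init -q demo && cd demo
g() { git -c user.email=a@b -c user.name=a "$@"; }  # fixed commit identity
g commit -q --allow-empty -m "init"
base=$(git rev-parse --abbrev-ref HEAD)             # default branch name varies
git checkout -q -b feature
echo "huge vendor blob" > vendor.bin && git add . && g commit -q -m "oops"
blob=$(git hash-object vendor.bin)
git rm -q vendor.bin && g commit -q -m "remove vendor.bin"

git checkout -q "$base"
git merge -q --squash feature        # stages only the net change: nothing
g commit -q --allow-empty -m "feature, squashed"
git branch -q -D feature             # nuke the branch that held the blob
git reflog expire --expire=now --all # drop lingering reflog references
git gc -q --prune=now                # the blob is now actually gone
```

          The reflog-expire and gc steps matter: until the old commits become unreachable and get pruned, the blob still sits in `.git`.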

      • @[email protected]
        link
        fedilink
        4124 days ago

        No, don’t do that. That modifies the commit hashes, so tags no longer work.

        git clone --filter=blob:none is where it’s at.
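
        Sketched against a local stand-in for the remote (names invented; hosted servers like GitHub already allow the filter, a `file://` remote needs the opt-in shown):

```shell
# Stand-in "server" repo with two revisions of a file.
tmp=$(mktemp -d) && cd "$tmp" && git init -q src
git -C src config uploadpack.allowFilter true  # hosted servers set this for you
echo v1 > src/big.txt
git -C src add big.txt
git -C src -c user.email=a@b -c user.name=a commit -q -m v1
echo v2 > src/big.txt
git -C src add big.txt
git -C src -c user.email=a@b -c user.name=a commit -q -m v2

# Blobless clone: full history, same hashes and tags, but historical blobs
# are only downloaded if something actually asks for them.
git clone -q --filter=blob:none "file://$tmp/src" dst
git -C dst rev-list --objects --all --missing=print | grep -c '^?'  # blobs never fetched
```
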

        • @[email protected]
          link
          fedilink
          English
          17
          edit-2
          24 days ago

          I don’t understand how we’re all using git directly, instead of it being some backend utility that we all drive through a sane wrapper.

          Every time you want to do anything with git it’s a weird series of arcane nonsense commands, and then someone cuts in saying “oh yeah, but that will destroy x, y and z, you have to use this other arcane nonsense command that also sounds nothing like what you’re trying to do”, and you sit there having no idea why either of them even kind of accomplishes what you want.

          • @[email protected]
            link
            fedilink
            2124 days ago

            It’s because git is a complex tool to solve complex problems. If you’re one hacker working alone, RCS will do an acceptable job. As soon as you add a second hacker, things change and RCS will quickly show its limitations. FOSS version control went through CVS and SVN before finally arriving at git, and there are good reasons we made each of those transitions. For that matter, CVS and SVN had plenty of arcane stuff to fix weird scenarios, too, and in my subjective experience, git doesn’t pile on appreciably more.

            You think deleting an empty directory should be easy? CVS laughs at your effort, puny developer.

            • @[email protected]
              link
              fedilink
              English
              1
              edit-2
              24 days ago

              It’s because git is a complex tool to solve complex problems. If you’re one hacker working alone, RCS will do an acceptable job. As soon as you add a second hacker, things change and RCS will quickly show its limitations. FOSS version control went through CVS and SVN before finally arriving at git, and there are good reasons we made each of those transitions. For that matter, CVS and SVN had plenty of arcane stuff to fix weird scenarios, too, and in my subjective experience, git doesn’t pile on appreciably more.

              Yes, it is a complex tool that can solve complex problems, but as a typical developer I am not doing anything complex with it, and the CLI surface area that’s exposed to me is by and large nonsense and does not meet me where I’m at, nor match the commands or naming I would expect.

              I mean NPM is also a complex tool, but the CLI surface area of NPM is “npm install”.

              • @[email protected]
                link
                fedilink
                624 days ago

                I am not doing anything complex with it

                So basic, well documented, easily understandable commands like git add, git commit, git push, git branch, and git checkout should have you covered.
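
                For what it’s worth, the everyday loop those commands cover, in an invented throwaway repo:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q demo && cd demo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "init"
base=$(git rev-parse --abbrev-ref HEAD)  # default branch name varies by config
git checkout -q -b feature               # git branch + switch in one step
echo "hello" > notes.txt
git add notes.txt                        # stage the change
git -c user.email=a@b -c user.name=a commit -q -m "add notes"
git checkout -q "$base"                  # back to the original branch
git branch                               # shows both branches
```
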

                the CLI surface area that’s exposed to me is by and large nonsense and does not meet me where I’m at

                What an interesting way to say “git has a steep learning curve”. Which is true; git takes time to learn and even more to master. You can get there solely by reading the man pages and online docs, though, which isn’t something a lot of other complex tools can say (looking at you, kubernetes).

                Also I don’t know if a package manager really compares in complexity to git, which is not just a version control tool, it’s also a thin interface for manipulating a directed acyclic graph.

                • @[email protected]
                  link
                  fedilink
                  English
                  1
                  edit-2
                  23 days ago

                  So basic, well documented, easily understandable commands like git add, git commit, git push, git branch, and git checkout should have you covered.

                  You mean: git add -A, git commit -m "xxx", git push or git push -u origin --set-upstream, etc. etc. etc. I get that there’s probably a reason for its complexity, but that doesn’t change the fact that it doesn’t just have a steep learning curve; it’s flat-out remarkably user-unfriendly sometimes.

              • @maryjayjay · 1 point · 24 days ago

                Git is too hard for you. Please stop using it

          • @[email protected]
            link
            fedilink
            924 days ago

            There are tons of wrappers for git, but they all kinda suck. They either don’t let you do something the cli does, so you have to resort to the arcane magicks every now and then anyways. Or they just obfuscate things to the point where you have no idea what it’s doing, making it impossible to know how to fix things if (when) it fucks things up.

          • @[email protected]
            link
            fedilink
            623 days ago

            Git is complicated, but then again, it’s a tool with a lot of options. Could it be nicer and less abstract in its use? Sure!

            However, if you compare what git does, and how it does it, to its competitors, then git is quite amazing. 5-10 years ago it was all SVN, the dark times. A simpler tool, and an actual headache to use.

          • @[email protected]
            link
            fedilink
            124 days ago

            I think in this case, “depth” was an inferior solution for fast cloning that they could implement quickly. Partial clone (“filter”) is the good solution that only came out recently-ish.

          • @[email protected]
            link
            fedilink
            023 days ago

            You are not entirely wrong, but just as some advice I would refrain from displaying fear of the command line in interviews.

            • @[email protected]
              link
              fedilink
              English
              3
              edit-2
              23 days ago

              Lol if an employer can’t have an intelligent discussion about user friendly interface design I’m happy to not work for them.

              Every interview I’ve ever been in there’s been some moment where I say ‘yeah I don’t remember that specific command, but conceptually you need to do this and that, if you want I can look up the command’ and they always say something along the lines of ‘oh no, yeah, that makes conceptual sense don’t worry about it, this isn’t a memory test’.

                • @[email protected]
                  link
                  fedilink
                  English
                  3
                  edit-2
                  23 days ago

                  Command line tools can be; git’s interface is not. There wouldn’t be a million memes about exiting vim if it were.

        • @[email protected]
          link
          fedilink
          4
          edit-2
          24 days ago

          What are you smoking? Shallow clones don’t modify commit hashes.

          The only thing that you lose is history, but that usually isn’t a big deal.

          --filter=blob:none probably also won’t help too much here since the problem with node_modules is more about millions of individual files rather than large files (although both can be annoying).

          • @[email protected]
            link
            fedilink
            124 days ago

            From github’s blog:

            git clone --depth=1 <url> creates a shallow clone. These clones truncate the commit history to reduce the clone size. This creates some unexpected behavior issues, limiting which Git commands are possible. These clones also put undue stress on later fetches, so they are strongly discouraged for developer use. They are helpful for some build environments where the repository will be deleted after a single build.

            Maybe the hashes aren’t different, but the important part is that comparisons beyond the fetched depth don’t work: git can’t know if a shallowly cloned repo has a common ancestor with some given commit outside the range, e.g. a tag.

            Blobless clones don’t have that limitation. Git will download a hash+path for each file, but it won’t download the contents, so it still takes much less space and time.

            If you want to skip all file data without any limitations, you can do git clone --filter=tree:0 which doesn’t even download the metadata
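
            The difference is easy to see side by side against a small local stand-in repo (names invented; a real server opts in to filters for you):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q src
git -C src config uploadpack.allowFilter true
for i in 1 2 3; do
  echo "rev $i" > src/file.txt
  git -C src add file.txt
  git -C src -c user.email=a@b -c user.name=a commit -q -m "rev $i"
done

git clone -q --depth=1          "file://$tmp/src" shallow   # truncated history
git clone -q --filter=blob:none "file://$tmp/src" blobless  # full history, lazy blobs
git clone -q --filter=tree:0    "file://$tmp/src" treeless  # lazy trees and blobs
echo "shallow sees  $(git -C shallow  rev-list --count HEAD) commit(s)"
echo "blobless sees $(git -C blobless rev-list --count HEAD) commit(s)"
```

            The shallow clone only knows about the latest commit; both filtered clones can walk the full commit graph and fetch the omitted objects on demand.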

            • @[email protected]
              link
              fedilink
              224 days ago

              Yes, if you ask about a tag on a commit that you don’t have git won’t know about it. You would need to download that history. You also can’t in general say “commit A doesn’t contain commit B” as you don’t know all of the parents.

              You are completely right that --depth=1 will omit some data. That is sort of the point but it does have some downsides. Filters also omit some data but often the data will be fetched on demand which can be useful. (But will also cause other issues like blame taking ridiculous amounts of time.)

              Neither option is wrong, they just have different tradeoffs.

    • @[email protected]
      link
      fedilink
      024 days ago

      See, this is the kind of shit that bothers me with Git, and we just sort of accept it because it’s THE STANDARD. And then we bolt these shitty LFS solutions onto the side because it doesn’t really work.

      Give me Perforce, please.

      • @MinFapper · 21 points · 24 days ago

        What was perforce’s solution to this? If you delete a file in a new revision, it still kept the old data around, right? Otherwise there’d be no way to rollback.

        • @[email protected]
          link
          fedilink
          9
          edit-2
          24 days ago

          Yes but Perforce is a (broadly) centralised system, so you don’t end up with the whole history on your local computer. Yes, that then has some challenges (local branches etc, which Perforce mitigates with Streams) and local development (which is mitigated in other ways).

          For how most teams work, I’d choose Perforce any day. Git is specialised towards very large, often part time, hyper-distributed development (AKA Linux development), but the reality is that most teams do work with a main branch in a central location.

  • mox · 58 points · edited · 24 days ago

    I can’t see past the word wrap implementation in that UI. Mo dules indeed.

    • mac · 1 point · 23 days ago

      Looks like the GitHub Android app.

  • Ephera · 45 points · 24 days ago

    Wow, that’s 300k lines of text that anyone who clones the repo has to download.

    • Ephera · 27 points · edited · 24 days ago

      I really don’t think so. The documentation says nothing of the sort.

      Maybe someone thought it’s a regex pattern, where escaping dots would make sense. But yeah, it mostly works like glob patterns instead.
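
      For reference, a few .gitignore glob patterns (paths invented), dots being literal:

```
# a directory with this name, at any depth (trailing slash: directories only)
node_modules/
# "." is literal in glob patterns; \.log is unnecessary
*.log
# a leading slash anchors the pattern to the repo root
/dist
```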

    • @brlemworld · 3 points · 24 days ago

      Also don’t need trailing slash

  • @BeefPiano · 18 points · 24 days ago

    Why do I get the feeling this is Steven’s commit?