I could research this on my own, but was interested in hearing from the community.

Software tends to fall into categories based on who has control, how it’s accessed, and who owns the data.

For instance, a FOSS project hosts encrypted user data for free, and the user easily controls who accesses it, but if the server/service goes down, users lose access to everything. Or, a user has their own offline files they control 100%, but sharing is more cumbersome.

Where does git fall on this spectrum? It seems like a mix: authoritative copies may live offline for a while before changes are merged back into the hosted version. It’s hosted, but it can be self-hosted, and multiple copies of the code can be offline as well. Does it rely on a central hosted source, and on a company willing to support the software?

I’ve never contributed to a project with version control before, though I’ve worked in a few places that used JIRA or git. I’m interested in how it works, and I’m just curious to read a Lemmy discussion while it’s raining where I am.

(As I prepare to press SUBMIT it occurs to me this is a FOSS question more than a Linux one. If this is a stupid post for this /r/, please report/remove or ask me to and I will.)

  • @[email protected]
    link
    fedilink
    28
    edit-2
    1 year ago

    When you make a project with git, what you’re doing is essentially making a database to control a sequence of changes (or history) that build up your codebase. You can send this database to someone else (or in other words they can clone it), and they can make their own changes on top. If they want to send you changes back, they can send you “patches” to apply on your own database (or rather, your own history).
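    As a minimal sketch of that flow (the URL and file names here are made up):

    ```
    # Clone someone's repository: you get the whole database/history
    git clone https://example.com/alice/project.git
    cd project

    # Record a change in your local copy of the history
    git commit -am "Fix typo in README"

    # Turn that commit into a patch file you can send to Alice
    git format-patch -1 HEAD

    # Alice applies your patch on top of her own history
    git am 0001-Fix-typo-in-README.patch
    ```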

    Note: everything here is decentralized. Everyone has the entire history, and they send the history they want others to have. Now, this can be a hassle with many developers involved. You can imagine sending everyone patches, and them putting those into their own trees, and vice versa. It’s a pain to coordinate. So in practice what ends up happening is we have a few (or often, just one) repos that work as a source of truth. Everyone sends patches to that repo and pulls down patches from that repo. That’s where code forges like GitHub come in. Their job is to host this source-of-truth repo, and essentially coordinate which patches are “officially” in the code.
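    In day-to-day use, that “source of truth” is just a remote you push to and pull from. Roughly (the URL and branch name are made up):

    ```
    # Point your clone at the agreed-upon repo
    git remote add origin https://github.com/example/project.git

    # Pull down changes others have gotten into the source of truth...
    git pull origin main

    # ...and send your own up to it
    git push origin main
    ```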

    In practice, even things like the Linux kernel have sources of truth. Linus’s tree is the “true” Linux; all the maintainers have their own trees that work as the source of truth for their own version of Linux (from which they send changes back to Linus when ready), and so on. Your company might have its own repo for its internal project to send to the maintainers as well.
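    Simplified (with a made-up tag and URL), that “sending changes back” happens with a pull request git generates itself, which the maintainer then emails to Linus:

    ```
    # Publish your maintainer branch somewhere Linus can fetch it
    git push https://example.com/my-subsystem.git for-linus

    # Generate the "please pull" summary that goes into the email
    git request-pull v6.8 https://example.com/my-subsystem.git for-linus
    ```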

    In practice that means everyone has a copy of the entire repo, but we designate one repo as the real one for the project at hand. This entire (somewhat convoluted) mess is just a way to decide “where do I get my changes from?”. Sending your changes to everyone doesn’t scale, so in practice we just choose who everyone coordinates with.

    Git is completely decentralized (it’s just a database - and everyone has their own copy), but project development isn’t. Code forges like GitHub just represent that.

    • @s38b35M5 (OP) · 1 year ago

      So even if GitHub and GitLab and similar were shut down, the data (code) being worked on can live on, and isn’t tied to the platform, right?

      • @[email protected]
        link
        fedilink
        8
        edit-2
        1 year ago

        Well, the bug tracker and other additional features are not inside the git repository, so those would be lost. But each ‘git clone’ is a complete copy of the (source code) repository, including the whole history of changes, the commit messages, dates, and individual changes. That’s stored on every single computer that cloned the repository, so you have a copy of everything locally, though it might be out of date if you didn’t pull the latest changes. Apart from that, it’s the same data that GitHub stores. You could just make it available somewhere else and continue.
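        For example, one way to re-host a repository (the URLs here are made up):

        ```
        # A mirror clone grabs every branch and tag, not just the default one
        git clone --mirror https://github.com/example/project.git
        cd project.git

        # Point it at a new home and push everything there
        git remote set-url origin https://new-host.example/project.git
        git push --mirror origin
        ```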

        • @s38b35M5 (OP) · 1 year ago

          Thanks for illuminating some black box for me!

  • @[email protected]
    link
    fedilink
    8
    edit-2
    1 year ago

    I think there isn’t really anything “authoritative” in git. You can upload your changes somewhere, or another developer can download changes from you. You can also all make incompatible changes, and then you won’t be able to sync anymore (you’d need to fix that first and manually handle the conflict). There’s nothing authoritative about any copy. In practice most people choose a central place, all upload their changes there, and everybody else regularly pulls them from there. But you could just as well do it directly with your colleague’s computer, if you have a network connection and access to it. The files, including the history of changes, are the same on every machine and server (if they’re all up to date). It’s like storing a directory, including past versions, on 10 different computers.
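    As a sketch of that direct, serverless syncing (hostname, path, and branch are made up):

    ```
    # Pull straight from a colleague's machine over SSH - no server involved
    git pull ssh://colleague-pc/home/sam/project main

    # If you both changed the same lines, git pauses and asks you to
    # resolve the conflict by hand, then finish with:
    git add <the-fixed-files>
    git commit
    ```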

  • @TheLordHumungus · 1 year ago

    Download all the builds from git and manually back them up, or there are programs to do it for you. I usually used an old laptop connected to multiple HDDs to back up onto (I haven’t done this for a few years now).
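    One simple way to do that by hand is git’s own bundle feature, which packs a whole repository, history included, into a single file (the names here are made up):

    ```
    # Pack every branch and tag into one file you can copy to a drive
    git bundle create project-backup.bundle --all

    # Later, restore it like any other remote
    git clone project-backup.bundle restored-project
    ```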

  • @Eideen · 1 year ago

    To be maintained, any software needs to be supported. If it isn’t supported and developed, other options will prevail.