• @[email protected]
    link
    fedilink
    529 months ago

    I am learning flatpak. Can someone explain why is like that???

    • Ananace · 65 points · 9 months ago

      Well, one part of it is that Flatpak pulls data over the network, and sometimes data sent over a network doesn’t arrive in the exact same shape as when it left the original system, which results in that same data being sent in multiple copies - until one manages to arrive correctly.

      • @AProfessional · 34 points · 9 months ago

        deleted by creator

        • @[email protected]
          link
          fedilink
          109 months ago

          I think this is actually very unlikely. Flatpak is most likely using some TCP-based protocol, and TCP takes care of this transparently; flatpak wouldn’t even know if any packets had to be retransmitted.
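          A quick sketch of why (hypothetical, using a local socket pair instead of a real flatpak download): whatever retransmission happens at the TCP layer, the application only ever sees each byte of the stream once, so it can’t double-count.

```python
import socket

# Toy illustration: the reader below stands in for a downloader like
# flatpak. It can only count the bytes the kernel hands it, and the
# kernel delivers each byte of the stream exactly once, no matter how
# many times packets were retransmitted underneath.
a, b = socket.socketpair()
payload = b"x" * 10_000
a.sendall(payload)
a.close()

received = 0
while True:
    chunk = b.recv(4096)
    if not chunk:
        break
    received += len(chunk)
b.close()

print(received)  # 10000: exactly len(payload), never more
```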

          • @AProfessional · 3 points · 9 months ago

            deleted by creator

            • @FooBarrington · 1 point · 9 months ago

              But it’s unlikely those would look like this. Flatpak only ever sees the bytes in order, so the only effect a failure could have is forcing the download to resume; at worst you’d receive a few bytes again due to block alignment, but nowhere near this much.
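              Back-of-the-envelope version of that (the 4 KiB block size is just an assumption for illustration): if a resume restarts from the last completed block, the re-downloaded overlap is bounded by one block.

```python
# Hypothetical resume arithmetic: restart from the last full 4 KiB block.
BLOCK = 4096
file_size = 1_000_000
interrupted_at = 123_456              # bytes received before the failure

resume_offset = (interrupted_at // BLOCK) * BLOCK  # round down to a block
duplicate_bytes = interrupted_at - resume_offset   # overlap downloaded twice

total_transferred = interrupted_at + (file_size - resume_offset)
print(duplicate_bytes)                # 576, always < 4096
print(total_transferred - file_size)  # 576: overhead == duplicate_bytes
```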

              • @AProfessional · 2 points · edited · 9 months ago

                deleted by creator

      • @[email protected]
        link
        fedilink
        English
        79 months ago

        Could also be that the HTTP server lied about the content length.

        • @[email protected]
          link
          fedilink
          English
          19 months ago

          It’s a protocol violation to do that, not least because it precludes connection reuse.
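          A downloader can at least detect it, though. Hypothetical sketch (the helper name is made up, not flatpak’s actual code):

```python
def content_length_matches(headers: dict, body: bytes) -> bool:
    """Compare the declared Content-Length against what actually arrived.

    Hypothetical helper: a server that 'lies' is caught because the
    declared size disagrees with the size of the body it sent.
    """
    declared = headers.get("Content-Length")
    if declared is None:
        return True  # no claim made (e.g. chunked transfer encoding)
    return int(declared) == len(body)

print(content_length_matches({"Content-Length": "5"}, b"hello"))     # True
print(content_length_matches({"Content-Length": "9999"}, b"hello"))  # False
```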

      • @[email protected]
        link
        fedilink
        29 months ago

        Hence why Fedora Linux actually recently removed delta updates from DNF. Turns out they used more data in retries than just downloading the whole package again.
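        The arithmetic of that failure mode, with made-up numbers just for illustration:

```python
# Hypothetical sizes: a delta is much smaller than the full package,
# but every failed delta attempt is wasted data, and the fallback is
# the full download anyway.
full_package = 10_000_000        # bytes for the whole package
delta = 2_000_000                # bytes for the delta
failed_attempts = 2              # deltas downloaded but failed to apply

data_used = failed_attempts * delta + full_package  # delta path + fallback
print(data_used)                 # 14000000
print(data_used > full_package)  # True: worse than skipping deltas entirely
```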

        • boredsquirrel · 1 point · 9 months ago

          Interesting, didn’t know that! That sounds like a fixable issue though…

          • @[email protected]
            link
            fedilink
            29 months ago

            I think they’ve moved from trying to fix it in DNF to using the copy-on-write capabilities of Btrfs. Can’t quite remember exactly.

      • @[email protected]
        link
        fedilink
        English
        19 months ago

        ??? Retransmitted packets don’t get counted towards downloaded file size

    • @[email protected]
      link
      fedilink
      179 months ago

      something something ostree and how complicated the stuff it does actually is

      • boredsquirrel · 1 point · edited · 9 months ago

        I mean, ostree is just git for binaries, isn’t it?

        But it will likely be the issue here.
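        The “git for binaries” part boils down to content-addressed storage. Toy sketch of the idea (not ostree’s actual on-disk format):

```python
import hashlib

# Files are stored under the hash of their content, so the same binary
# appearing in many commits/refs only occupies space once.
store = {}

def commit_file(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    store[digest] = data      # idempotent: identical content, same key
    return digest

first = commit_file(b"libfoo.so contents")
second = commit_file(b"libfoo.so contents")  # "committed" again later

print(first == second)  # True: both commits point at the same object
print(len(store))       # 1: deduplicated
```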

    • lemmyvore · 14 points · 9 months ago

      Shoddy implementation they can’t be arsed to fix. They do all kinds of shenanigans: showing the size of all locales but only downloading one (or the other way around), not counting dependencies and then realizing it has to download something extra, etc. It’s all over the place and I’ve given up on it making any sense. I’ve just made sure it’s on a drive with plenty of space and hope for the best.

      • @AProfessional · 5 points · edited · 9 months ago

        deleted by creator