Office Space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

  • Fushuan [he/him]
    32 days ago

    The runner is open source, the model is not

    The service uses both, so calling their service open source gives a false impression to the 99.99% of users who don’t know better.

    • magic_lobster_party
      2 days ago

      The model is as far as I know open, even for commercial use. This is in stark contrast with Meta’s models, which have (or had?) a bespoke community license restricting commercial use.

      Or is there anything that can’t be done with the DeepSeek model that I’m unaware of?

      • Fushuan [he/him]
        32 days ago

        The model is open, it’s not open source!

        How is it so hard to understand? The complete source of the model is not open. It’s not a hard concept.

        Sorry if I’m coming off as rude, but I’m getting increasingly frustrated at having to explain a simple combination of two words that is pretty self-explanatory.

        • magic_lobster_party
          2 days ago

          Ok, I understand now why people are upset. There’s a disagreement over terminology.

          The source code for the model is open source. It’s defined in PyTorch, and the source is available under the MIT license. Anyone can download it and do whatever they want with it.

          The weights for the model are open, but they’re not open source, as they’re not source code (or an executable binary, for that matter). No one is arguing that the model weights are open source, but there does seem to be an argument over whether the model is open source.

          And even if they provided the source code for the training script (and all its data), it’s unlikely anyone would reproduce the same model weights due to the randomness involved. Training model weights is not like compiling an executable: you’ll get different results every time.
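The run-to-run randomness argument can be sketched with a toy gradient-descent loop in plain Python (an invented example for illustration only, nothing to do with DeepSeek’s or any real LLM’s training code): the initial weight and the data order both come from a seed, and two short runs with different seeds end on different weights.

```python
import random

def train(seed):
    """Toy SGD fit of y = 2x. The initial weight and the order the
    samples are visited both come from the run's seed."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)                 # random initialization
    data = [(x, 2.0 * x) for x in range(1, 9)]
    for _ in range(5):                         # stop well before convergence
        rng.shuffle(data)                      # random data order
        for x, y in data:
            w -= 0.001 * 2 * (w * x - y) * x   # gradient step on squared error
    return w

# Two runs with different seeds end on different weights,
# even though the dataset and hyperparameters are identical.
print(train(0), train(1))
```

Rerunning `train(0)` reproduces the same number, so the difference between the two printed weights comes only from the seed, not from the arithmetic.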

          • Fushuan [he/him]
            12 days ago

            Hey, I have trained several models in PyTorch, Darknet and TensorFlow.

            With the same dataset and the same training parameters, the same final iteration of training actually does return the same weights. There’s no randomness unless they specifically add random layers, and that’s not really a good idea with RNNs; it wasn’t when I was working with them, at least. In any case, the weights should converge to a very similar point even if randomness is introduced, or else the RNN is pretty much worthless.
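This reproducibility claim holds in the same kind of toy setup (again an invented sketch, not any real training pipeline): once every source of randomness is tied to one explicit seed, a rerun with the same data and hyperparameters returns bit-identical weights.

```python
import random

def train(seed):
    """Toy SGD fit of y = 2x; all randomness (initial weight,
    shuffle order) is driven by the single explicit seed."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)
    data = [(x, 2.0 * x) for x in range(1, 9)]
    for _ in range(3):
        rng.shuffle(data)
        for x, y in data:
            w -= 0.001 * 2 * (w * x - y) * x
    return w

# Same seed, same data, same hyperparameters: bit-identical weights.
print(train(7) == train(7))   # prints True
```

Real frameworks need more care than this (seeding every library involved and avoiding nondeterministic GPU kernels), but the principle is the same.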

            • magic_lobster_party
              12 days ago

              There’s usually randomness involved in the initial weights and in the order the data is processed.

              • Fushuan [he/him]
                12 days ago

                Not enough to make the results diverge. Randomness is added to avoid falling into local maxima during optimization; you should still end up in the same global maximum. Models usually run until their optimization converges.

                As stated, if the randomness is big enough that multiple reruns end up with different weights, i.e. optimized for different maxima, the randomization is trash. Anything worth its salt won’t have randomization that big.

                So, going back to my initial point, we need the training data to validate the weights. There are ways to check the performance of a model (quite literally, the same algorithm that is used to evaluate the weights during training is then used to evaluate the trained weights post-training); the performance should be identical up to a very small rounding error if a rerun uses the same data and parameters.
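That last point can be illustrated on the same kind of convex toy problem (invented for illustration; deep non-convex models need not behave this nicely): two differently seeded runs trained to convergence end at practically the same weight, and the training-time squared-error objective, reused as the evaluation metric, agrees to rounding error.

```python
import random

DATA = [(x, 2.0 * x) for x in range(1, 9)]

def train(seed, epochs=20):
    """Toy SGD fit of y = 2x, run long enough to converge."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)
    data = list(DATA)
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            w -= 0.01 * 2 * (w * x - y) * x
    return w

def mse(w):
    """The same squared-error objective used during training,
    reused to evaluate the trained weight."""
    return sum((w * x - y) ** 2 for x, y in DATA) / len(DATA)

# Different seeds, but trained to convergence: the evaluation
# scores agree up to a tiny rounding error.
print(abs(mse(train(0)) - mse(train(1))))
```

On a convex objective like this one the optimum is unique, which is why the two runs agree; with non-convex losses the claim is much weaker, which is exactly what the disagreement above is about.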