Office space meme:

“If y’all could stop calling an LLM “open source” just because they published the weights… that would be great.”

  • @WraithGear
    link
    English
    -1
    2 days ago

    Are you talking source as in source code? Or source as in the data the LLM uses? Because the source code is available. The weights are not the output; they are a function. The LLM's response is the output.

    But the weights can be changed, and the input data can be changed. And if they are… it's still DeepSeek. If you can change them, they are not what makes DeepSeek, DeepSeek.

    I use boot.dev, which has an AI. But they changed the dataset to only cover relevant topics, changed its weights, and gave it tone instructions. And while it plays a character, it's still ChatGPT.

    • @[email protected]
      link
      fedilink
      5
      2 days ago

      I used the word “source” a couple times in that post… The first time was in a general sense, as an input to generate an output. The training data is the source, the model is the “function” (using the mathematics definition here, NOT the computer science definition!), and the weights are the output. The second use was “source code.”

      Weights can be changed just like a compiled binary can be changed. Closed source software can be modified without having access to the source code.
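      To make the input/output direction concrete, here's a minimal toy sketch (hypothetical, nothing to do with DeepSeek's actual code): training takes a dataset as input and produces weights as output, much like a compiler takes source code and produces a binary.

```python
def train(dataset, lr=0.01, steps=1000):
    """Fit y = w * x by gradient descent; the dataset is the 'source'."""
    w = 0.0  # the weight starts untrained
    for _ in range(steps):
        # mean gradient of squared error d/dw (w*x - y)^2 over the dataset
        grad = sum(2 * x * (w * x - y) for x, y in dataset) / len(dataset)
        w -= lr * grad
    return w  # the trained weight is the output (the "binary")

data = [(1, 2), (2, 4), (3, 6)]  # toy "training data": y = 2x
weights = train(data)
print(round(weights, 3))  # prints 2.0
```

      Releasing `weights` alone is like shipping the compiled binary: you can use it, and even patch it, without ever seeing `data`.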

      • @WraithGear
        link
        English
        -2
        edit-2
        2 days ago

        The LLM is a machine that, simplified down, takes two inputs: a dataset and weight variables. These two inputs are not the focus of the software; as long as the structure is valid, the machine will give an output. The input is not the machine, and the machine's source code is open source. The machine IS what is revolutionary about this LLM.

        It's not being praised because its weights are fine-tuned, and it didn't sink Nvidia's stock price by 700 billion because it has extra-special training data. It's special because of its optimizations, and its novel method of using two halves to bounce ideas back and forth and to evaluate its answers. It's the methodology of its function, and that is given to you openly in its source code.

        • @[email protected]
          link
          fedilink
          7
          edit-2
          2 days ago

          I don’t know what, if any, CS background you have, but that is way off. The training dataset is used to generate the weights, or the trained model. In the context of building a trained LLM model, the input is the dataset and the output is the trained model, or weights.

          It's more appropriate to call DeepSeek "open-weight" rather than open-source.
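          A toy sketch of that distinction (hypothetical example, not DeepSeek's actual format or API): with published weights you can *run* the model, but you cannot *reproduce* it, because retraining would require the unreleased training data.

```python
published_weights = {"w": 2.0}  # all an "open-weight" release gives you

def infer(weights, x):
    """Inference needs only the weights, not the training data."""
    return weights["w"] * x

print(infer(published_weights, 3))  # prints 6.0
# Reproducing published_weights from scratch would need the original
# dataset, which is not part of the release; hence "open-weight",
# not "open-source".
```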

    • Fushuan [he/him]
      link
      fedilink
      English
      1
      2 days ago

      What most people understand as DeepSeek is the app that uses their trained model, not the running or training engines.

      This post mentions open source, not open source code, which is a big distinction. The source of a trained model is partly the training engine, and to a much bigger part the input data. We only have access to a fraction of that "source", so the service isn't open source.

      Just to be clear: no LLM service is open source currently.