Office Space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

  • Fushuan [he/him]
    19 days ago

    Hey, I have trained several models in PyTorch, Darknet, and TensorFlow.

    With the same dataset and the same training parameters, the same final iteration of training actually does return the same weights. There’s no randomness unless they specifically add random layers, and that’s not really a good idea with RNNs; it wasn’t when I was working with them, at least. In any case, the weights should converge to a very similar point even if randomness is introduced, or else the RNN is pretty much worthless.
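
    A minimal PyTorch sketch of that claim (the train_once helper, toy nn.Linear model, and synthetic data are illustrative assumptions, not anything from the thread): once the RNG seed is pinned, a rerun with the same data and parameters returns bit-identical weights, at least on CPU with deterministic kernels.

      import torch
      import torch.nn as nn

      def train_once(seed: int) -> dict:
          # Fix every RNG the run touches; on CPU with deterministic
          # kernels this makes the whole run bit-for-bit repeatable.
          torch.manual_seed(seed)
          torch.use_deterministic_algorithms(True)
          model = nn.Linear(8, 1)
          opt = torch.optim.SGD(model.parameters(), lr=0.1)
          x = torch.randn(64, 8)            # same seed -> same "dataset"
          y = x.sum(dim=1, keepdim=True)
          for _ in range(100):              # same number of iterations
              opt.zero_grad()
              nn.functional.mse_loss(model(x), y).backward()
              opt.step()
          return model.state_dict()

      w1, w2 = train_once(0), train_once(0)
      print(all(torch.equal(w1[k], w2[k]) for k in w1))  # True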

    • magic_lobster_party
      19 days ago

      There’s usually randomness involved in the initial weights and in the order the data is processed.
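
      For concreteness, a hedged sketch of those two sources (the toy sizes and names are illustrative, not from the thread): the init draw and the shuffle order both come from seeded RNGs, so they differ across runs exactly when the seeds do.

        import torch
        from torch.utils.data import DataLoader, TensorDataset

        # Source 1: layer weights are drawn from an RNG at construction.
        torch.manual_seed(0)
        w_a = torch.nn.Linear(4, 1).weight.clone()
        torch.manual_seed(1)
        w_b = torch.nn.Linear(4, 1).weight.clone()
        print(torch.equal(w_a, w_b))  # False: init depends on the seed

        # Source 2: shuffle=True reorders the data with another RNG.
        data = TensorDataset(torch.arange(8).float())
        loader = DataLoader(data, batch_size=1, shuffle=True,
                            generator=torch.Generator().manual_seed(0))
        print([int(b) for (b,) in loader])  # one seed-dependent order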

      • Fushuan [he/him]
        19 days ago

        Not enough to make the results diverge. Randomness is added to avoid getting stuck in local optima during optimization; you should still end up at the same global optimum. Models usually run until their optimization converges.

        As stated, if the randomness is big enough that multiple reruns end up with different weights, i.e. optimized for different optima, the randomization is trash. Anything worth its salt won’t have randomization that big.

        So, going back to my initial point, we need the training data to validate the weights. There are ways to check the performance of a model (quite literally, the same algorithm that is used to evaluate the weights during training is then used to evaluate the trained weights post-training), and the performance should be identical up to a very small rounding error if a rerun is done with the same data and parameters.
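
        As a sketch of that check (the rerun helper, toy convex model, and synthetic data are illustrative assumptions, chosen so all reruns provably share one global optimum): train on fixed data under different init seeds, then score the frozen weights with the same loss the optimizer minimized.

          import torch
          import torch.nn as nn

          torch.manual_seed(42)
          x = torch.randn(64, 8)        # one fixed dataset for every rerun
          y = x.sum(dim=1, keepdim=True)

          def rerun(seed: int) -> float:
              torch.manual_seed(seed)   # only the weight init differs
              model = nn.Linear(8, 1)
              opt = torch.optim.SGD(model.parameters(), lr=0.1)
              for _ in range(500):      # run until optimization converges
                  opt.zero_grad()
                  nn.functional.mse_loss(model(x), y).backward()
                  opt.step()
              # Post-training check: the same loss used during training
              # now evaluates the frozen weights.
              with torch.no_grad():
                  return nn.functional.mse_loss(model(x), y).item()

          losses = [rerun(s) for s in (0, 1, 2)]
          print(losses)                 # near-identical despite random init
          print(max(losses) - min(losses) < 1e-6)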