• @Hackworth
    1318 hours ago

    True, but they also released a paper that detailed their training methods. Is the paper sufficiently detailed such that others could reproduce those methods? Beats me.

    • @KingRandomGuy
      238 minutes ago

      I would say that, compared to the standards of top ML conferences, the paper is relatively light on detail. Nonetheless, some folks have been able to reimplement portions of their techniques.

      ML in general has a reproducibility crisis. Lots of papers are extremely hard to reproduce even when they're open source, since the optimization process is partly random (ordering of batches, augmentations, nondeterminism in GPUs, etc.), and unfortunately, even with seeding, the randomness is not guaranteed to be consistent across platforms.
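      To illustrate what seeding does and doesn't buy you, here's a minimal sketch (the `seed_everything` helper is just an illustrative name, not from any particular codebase). Seeding the RNGs makes runs repeatable on the *same* machine and library versions, but it does nothing about GPU nondeterminism or cross-platform differences:

      ```python
      import random
      import numpy as np

      def seed_everything(seed: int) -> None:
          # Seed the Python and NumPy RNGs. A real training script would also
          # seed the framework's RNG (e.g. torch.manual_seed) and opt into
          # deterministic kernels where available -- but even then, results
          # are only guaranteed on the same hardware/software stack.
          random.seed(seed)
          np.random.seed(seed)

      seed_everything(42)
      a = np.random.rand(3)

      seed_everything(42)
      b = np.random.rand(3)

      # Same seed, same platform: identical draws.
      assert np.allclose(a, b)
      ```

      The catch the comment above points at: nothing in this snippet (or its framework equivalents) pins down GPU atomics, batch ordering under data-loader parallelism, or BLAS differences between platforms, which is why bitwise reproduction of a published training run is so hard.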