• @sosodev
    57 · 11 months ago

    It sounds like the model is overfitting the training data. They say it scored 100% on the test set, which almost always indicates that the model has learned to ace the data it was evaluated on but will flop in the real world.

    I don’t think we should put much weight on this news article. It’s just more overblown hype for the sake of clicks.

    • LostXOR
      9 · 11 months ago

      The article says they kept 15% of the data for testing, so it’s not overfitting. I’m still skeptical though.

      • @sosodev
        8 · 11 months ago

        I’m pretty sure it’s possible to overfit even with a large held-out test set. If the test split shares the training data’s quirks, or a feature leaks the label, the model can ace the split without actually generalizing.
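
        As a toy sketch of what I mean (assuming scikit-learn and NumPy; the synthetic data and feature names are made up, not from the article): a feature that leaks the label sails through a random 85/15 split with a near-perfect score, then collapses when that feature isn’t available at prediction time.

        ```python
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, size=2000)            # binary labels
        signal = y + rng.normal(0, 0.1, size=2000)   # feature that leaks the label
        noise = rng.normal(size=(2000, 5))           # genuinely uninformative features
        X = np.column_stack([signal, noise])

        # Same 85/15 random split the article describes
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.15, random_state=0
        )
        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
        print(model.score(X_test, y_test))   # ~1.0: the leak is in the test split too

        # "Real world" data where the leaked feature is missing / zeroed out
        X_real = X_test.copy()
        X_real[:, 0] = 0.0
        print(model.score(X_real, y_test))   # drops to roughly chance (~0.5)
        ```

        The held-out split looks perfect only because it was cut from the same flawed dataset; the 100% score says nothing about new data.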