• @FooBarrington
    15 hours ago

    My god.

    There are many parameters that you set before training a new model, one of which (simplified) is the size of the model, or (roughly) the number of neurons. There isn’t any natural lower or upper bound for the size; instead, you choose it based on the hardware you want to run the model on.
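
    A minimal sketch of what “size as a hyperparameter” means in practice, assuming a toy PyTorch transformer (the names and numbers are purely illustrative, not any lab’s actual configuration):

    ```python
    import torch.nn as nn

    # Size knobs chosen before training: nothing in the architecture caps them,
    # only the GPU memory and compute budget you can afford.
    HIDDEN_SIZE = 1024   # width of each layer (roughly, "neurons" per layer)
    NUM_LAYERS = 12      # depth of the network
    VOCAB_SIZE = 32000

    model = nn.Sequential(
        nn.Embedding(VOCAB_SIZE, HIDDEN_SIZE),
        *[nn.TransformerEncoderLayer(d_model=HIDDEN_SIZE, nhead=8, batch_first=True)
          for _ in range(NUM_LAYERS)],
        nn.Linear(HIDDEN_SIZE, VOCAB_SIZE),
    )

    # Doubling HIDDEN_SIZE or NUM_LAYERS grows the parameter count, and with it
    # the hardware needed to train and serve the model.
    print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
    ```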

    Now the promise from OpenAI (from their many papers, and press releases, and …) was that we’d be able to reach AGI by scaling. Part of the reason why Microsoft invested so much money into OpenAI was their promise of far greater capabilities for the models, given enough hardware. Microsoft wanted to build a moat.

    Now, with DeepSeek’s techniques, the same hardware lets you scale even further. If Microsoft really thought OpenAI could reach GPT-5, GPT-6 or whatever through scaling, they’d keep the GPUs for themselves to widen their moat.

    But they’re not doing that; instead, they’re scaling back their investments, even though more advanced models will most likely still use more hardware on average. Don’t forget that there are many players in this field that keep pushing the boundaries. If GPT-4.5 is any indication, they’ll have to scale up massively to keep any advantage over the rest of the market. But they’re not doing that.

    • @Blue_Morpho
      3 hours ago

      “training a new model”

      Is equivalent to "making a new game" with better graphics.

      I’ve already explained that analogy several times.

      If people pay you for the existing model, you have no reason to immediately train a better one.