• Pennomi
    17 months ago

    What we haven’t hit yet is the point of diminishing returns for model efficiency. Small, locally run models are still improving rapidly, which means we’re going to see gains for the everyday person instead of just for corporations with huge GPU clusters.

    That in turn lets more scientists with lower budgets experiment on LLMs, increasing the chances of the next major innovation.

    • @CeeBee
      17 months ago

      Exactly. These are still the very early days for this stuff.

      The next few years will be wild.