• @[email protected]
    8
    2 months ago

    LLMs have already reached the end of the line 🤔

    I don’t believe that. At least from an implementation perspective, we’re extremely early on, and I don’t see why the tech itself can’t be improved either.

    Maybe its current iteration has hit a wall, but I don’t think anyone can really say what the future holds for it.

    • @jacksilver
      24
      2 months ago

      LLMs have been around since roughly 2017 (I originally wrote 2016; a comment below corrected me that the Attention paper was 2017). While scaling them up has improved their performance and capabilities, there are fundamental limitations to the underlying approach. Behind the scenes, LLMs (even multimodal ones like GPT-4) are trying to predict what is most expected. That can be powerful, but it means they can never innovate or serve as systems of truth.

      For years we used things like tf-idf to vectorize words, then embeddings, and now transformers (souped-up embeddings). Each approach has its limits, and LLMs are no different. The results we see now are surprisingly good, but they don’t overcome the baseline limitations of the underlying model.
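      As a toy illustration of the oldest approach in that list: tf-idf just weighs how often a word appears in one document against how common it is across the whole corpus. A minimal stdlib-only sketch (the corpus and whitespace tokenization are made up for the example):

```python
import math

# Toy corpus: each document is a list of tokens.
docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs".split(),
]

def tf_idf(term, doc, docs):
    # Term frequency: how often the term appears in this document.
    tf = doc.count(term) / len(doc)
    # Inverse document frequency: terms that appear in fewer
    # documents score higher.
    df = sum(1 for d in docs if term in d)
    idf = math.log(len(docs) / df)
    return tf * idf

# "the" appears in two of the three documents, so it is down-weighted;
# "cat" appears in only one, so it scores higher despite being rarer
# within the document.
print(tf_idf("the", docs[0], docs))
print(tf_idf("cat", docs[0], docs))
```

The point of the comparison: a tf-idf vector says nothing about meaning ("cat" and "cats" are unrelated columns), which is the limitation embeddings and then transformers were brought in to address.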

      • Todd Bonzalez
        7
        2 months ago

        The “Attention Is All You Need” paper that birthed modern AI came out in 2017. Before Transformers, “LLMs” were pretty much just Markov chains and statistical language models.
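        For contrast, a Markov-chain language model of the pre-transformer sort just counts which word follows which and predicts the most frequent successor. A toy bigram sketch (the training text is invented for illustration):

```python
from collections import Counter, defaultdict

# "Train" a bigram model: count which word follows which.
text = "the cat sat on the mat the cat ran".split()
follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

# Prediction is literally "the most expected next word"
# given only the previous one.
def predict(word):
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat": it follows "the" twice in the training text
```

Transformers differ in that attention lets the prediction condition on the entire preceding context rather than a fixed window, but the training objective is still next-token prediction.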

        • @jacksilver
          2
          2 months ago

          You’re right, I thought that paper came out in 2016.

    • @[email protected]
      6
      2 months ago

      I’m not trained in formal computer science, so I can’t evaluate the quality of this paper’s argument, but there’s a preprint out that claims to prove that current computing architectures will never be able to advance to AGI, and that rather than accelerating, improvements will only slow down, because each incremental advance requires an exponential increase in resources (it’s an NP-hard problem). That doesn’t prove LLMs are the end of the line, but it does suggest that further improvements are likely to be marginal.

      Reclaiming AI as a theoretical tool for cognitive science

    • @Wooki
      -5
      2 months ago

      we’re extremely early on

      Oh really! The underlying analysis has been established since the ’80s. It’s so far from early on that the statement is comical.

      • Todd Bonzalez
        3
        2 months ago

        The transformer, the foundation of modern “AI”, was proposed in 2017. Whatever we called “AI” and “machine learning” before that was mostly convolutional networks inspired by the 1980s “Neocognitron”, which is nowhere near as impressive.

        The most advanced thing a convolutional network ever accomplished was DeepDream, and visual generative AI has skyrocketed in the 10 years since then. Anyone looking at this situation who believes that we have hit bedrock is delusional.

        From DeepDream to Midjourney in 10 years is incredible. The next 10 years are going to be very weird.