We could have AI models in a couple years that hold the entire internet in their context window.

  • @[email protected]M
    link
    fedilink
    English
    51 year ago

    > We could have AI models in a couple years that hold the entire internet in their context window.

    That’s a really bold claim.

    • Behohippy · 3 points · 1 year ago

      Also not sure how that would be helpful. If every prompt needs to rip through those tokens first, before predicting a response, it’ll be stupid slow. Even now with llama.cpp, it’s annoying when it pauses to do the context window shuffle thing.
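
      For scale, a rough back-of-the-envelope sketch (the throughput figure is an assumption, not a benchmark, and real attention cost grows worse than linearly, so this is optimistic):

      ```python
      # Rough prefill-cost estimate: before generating a single token, the
      # model has to process every token already sitting in the context.

      def prefill_seconds(context_tokens: int, tokens_per_sec: float) -> float:
          """Time spent just reading the prompt, assuming linear scaling."""
          return context_tokens / tokens_per_sec

      # Assumed prompt-processing throughput for a local llama.cpp-style
      # setup; real numbers vary wildly with hardware and model size.
      PREFILL_TPS = 500.0

      for ctx in (8_000, 128_000, 10_000_000_000):  # typical today vs. "the internet"
          print(f"{ctx:>14,} tokens -> ~{prefill_seconds(ctx, PREFILL_TPS):,.0f} s")
      ```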

        • @Luovahulluus · 2 points · 1 year ago

          For now, I’d be happy with an AI that had access to and remembered the beginning of our conversation.
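
          A minimal sketch of one way to fake that today, assuming nothing about any particular chat frontend: pin the first few messages and fill the rest of the token budget with the most recent ones (the word-split count is a crude stand-in for a real tokenizer):

          ```python
          def build_context(messages: list[str], budget: int = 4000, pinned: int = 2) -> list[str]:
              """Keep the first `pinned` messages, then as many recent ones as fit."""
              def tokens(m: str) -> int:
                  return len(m.split())  # crude stand-in for a real tokenizer

              head = messages[:pinned]
              remaining = budget - sum(tokens(m) for m in head)

              tail: list[str] = []
              for m in reversed(messages[pinned:]):
                  if tokens(m) > remaining:
                      break
                  tail.append(m)
                  remaining -= tokens(m)

              return head + list(reversed(tail))
          ```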

    • @[email protected]OP
      link
      fedilink
      English
      1
      edit-2
      1 year ago

      Perplexity has pretty much solved that, since it searches the internet and uses the information it finds. But I don’t know of any advances that solve it directly in LLMs.
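
      Roughly how that pattern works: fetch documents at query time and put them in the prompt, so the knowledge never has to live in the weights or in a giant context. A minimal sketch; search() and generate() here are stand-ins, not Perplexity’s actual API:

      ```python
      def search(query: str) -> list[str]:
          """Placeholder for a real web-search call."""
          return [f"<snippet 1 about {query}>", f"<snippet 2 about {query}>"]

      def generate(prompt: str) -> str:
          """Placeholder for a real LLM call."""
          return "<answer grounded in the snippets>"

      def answer(question: str) -> str:
          # Retrieve fresh context per query instead of holding it all in the window.
          sources = "\n".join(search(question))
          prompt = (
              "Answer using only these sources:\n"
              f"{sources}\n\n"
              f"Question: {question}\nAnswer:"
          )
          return generate(prompt)

      print(answer("What's new in llama.cpp?"))
      ```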