We could have AI models in a couple years that hold the entire internet in their context window.

  • Behohippy
    link
    English
    3
    1 year ago

    Also not sure how that would be helpful. If every prompt needs to rip through those tokens first, before predicting a response, it’ll be stupid slow. Even now with llama.cpp, it’s annoying when it pauses to do the context window shuffle thing.

      • @Luovahulluus
        link
        English
        2
        edit-2
        1 year ago

        For now, I’d be happy with an AI that had access to and remembered the beginning of our conversation.
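The "stupid slow" concern above can be put in rough numbers. Before a model generates anything, it has to prefill (process) every token already in the context, and that cost grows at least linearly with context length. The sketch below uses a hypothetical prefill rate of 5,000 tokens/s, which is purely illustrative; real throughput depends heavily on hardware and model size:

```python
# Back-of-the-envelope prefill latency estimate.
# The 5,000 tokens/s rate is an assumption for illustration only;
# actual prefill speed varies widely by hardware and model.

def prefill_seconds(context_tokens: int, tokens_per_second: float) -> float:
    """Time spent processing the prompt before the first output token."""
    return context_tokens / tokens_per_second

rate = 5_000.0  # assumed prefill throughput (tokens/s)

# A typical long chat context vs. an internet-scale context.
print(f"8k-token chat:   {prefill_seconds(8_000, rate):.1f} s")
print(f"1B-token corpus: {prefill_seconds(1_000_000_000, rate) / 3600:.0f} h")
```

Even ignoring that attention cost is worse than linear in practice, an internet-scale prompt would take days of prefill per query at this rate, which is why caching or retrieval is used instead of re-reading everything each time.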