• @brucethemoose

    Gemini 1.5 used to be the best long-context model around, by far.

    Gemini Flash Thinking from earlier this year was very good for its speed and price, but it has regressed a ton since.

    Gemini 1.5 Pro is literally better than the new 2.0 Pro in some of my tests, especially long-context ones. I dunno what happened there, but yeah, they probably overtuned it or something.