Each LLM was given the same 1,000 chess puzzles to solve (see puzzles.csv). Benchmarked on Mar 25, 2024.

| Model                    | Solved | Solved % | Illegal Moves | Illegal Moves % | Adjusted Elo |
|--------------------------|--------|----------|---------------|-----------------|--------------|
| gpt-4-turbo-preview      | 229    | 22.9%    | 163           | 16.3%           | 1144         |
| gpt-4                    | 195    | 19.5%    | 183           | 18.3%           | 1047         |
| claude-3-opus-20240229   | 72     | 7.2%     | 464           | 46.4%           | 521          |
| claude-3-haiku-20240307  | 38     | 3.8%     | 590           | 59.0%           | 363          |
| claude-3-sonnet-20240229 | 23     | 2.3%     | 663           | 66.3%           | 286          |
| gpt-3.5-turbo            | 23     | 2.3%     | 683           | 68.3%           | 269          |
| claude-instant-1.2       | 10     | 1.0%     | 707           | 70.7%           | 245          |
| mistral-large-latest     | 4      | 0.4%     | 813           | 81.3%           | 149          |
| mixtral-8x7b             | 9      | 0.9%     | 832           | 83.2%           | 136          |
| gemini-1.5-pro-latest*   | FAIL   | -        | -             | -               | -            |
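The scoring harness itself isn't published here (only puzzles.csv is). As a rough sketch of how "illegal moves" can be counted, assuming each puzzle is a FEN position with a known best move in UCI notation, the python-chess library makes the legality check straightforward. The classify_answer helper and the puzzle format are assumptions for illustration, not the benchmark's actual code:

```python
# Hypothetical scoring sketch using the python-chess library.
import chess

def classify_answer(fen: str, answer_uci: str, solution_uci: str) -> str:
    """Classify a model's answer as 'solved', 'wrong', or 'illegal'."""
    board = chess.Board(fen)
    try:
        move = chess.Move.from_uci(answer_uci)  # raises ValueError on malformed input
    except ValueError:
        return "illegal"
    if move not in board.legal_moves:           # well-formed but not playable here
        return "illegal"
    return "solved" if answer_uci == solution_uci else "wrong"

# From the starting position, "e2e5" is well-formed UCI but not a legal move.
print(classify_answer(chess.STARTING_FEN, "e2e5", "e2e4"))  # -> illegal
```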

Published by the CEO of Kagi!

  • @Cyyy
    22 months ago

    I mean… try using it for even simple stuff like designing code. Often, ChatGPT invents a fantasy library that does the task you asked it to do… the library doesn't exist, but ChatGPT writes you code using that fantasy library anyway. Same with program functions that don't exist. (A quick way to check for that is sketched at the end of this comment.)

    The same happens with things like people, telephone numbers, locations, books, etc… tons of fantasy stuff.

    LLMs aren't trustworthy for that kind of thing if you need real information and not just creative help with fiction. And even for those creative tasks they are usually not really good enough.
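    One cheap guard against the fantasy-library problem is to ask Python's import machinery whether a suggested module even exists before running anything. This is an illustrative sketch, not part of the original comment; magic_chess_ai is an invented name standing in for a hallucinated package:

    ```python
    # Check whether an LLM-suggested dependency exists in the current
    # environment before trusting any code that imports it.
    import importlib.util

    def module_exists(name: str) -> bool:
        """Return True if Python's import machinery can locate `name`."""
        return importlib.util.find_spec(name) is not None

    print(module_exists("json"))            # True: a real stdlib module
    print(module_exists("magic_chess_ai"))  # False: a plausible-sounding invention
    ```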

    • @Buffalox
      12 months ago

      Thanks, I wasn't aware of that; the "fantasy" stuff was news to me when I read that they make illegal moves to solve chess puzzles.
      There has been a lot of praise for how amazing these LLMs are, but apparently they have some pretty serious limitations that might even make them dangerous to use if you aren't aware of them.

      • @Cyyy
        2 months ago

        The issue with LLMs is that they were trained on all kinds of data: not just real scientific data but also fantasy (lies, novels, movie scripts, etc.), and nobody told the LLMs during training what is fantasy and what isn't. So they only know how to generate text that looks "legit," without really knowing what is true and what isn't. If you ask for a person and their personal details, for example, an LLM can generate real-looking data that is pure fantasy, because it has learned what such data looks like. The same goes for everything else, like programming code, book titles, facts, etc… LLMs just generate text in the correct format, text that looks real, without caring whether it is real or not.
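        A small illustration of that "right format, unknown truth" point: a checksum-valid ISBN-13 passes every format check yet need not refer to any real book. The ISBN below is made up for the example:

        ```python
        # Format validity is not factual existence: this checksum only proves
        # the string is shaped like an ISBN-13, not that the book exists.
        def isbn13_checksum_ok(isbn: str) -> bool:
            digits = [int(c) for c in isbn if c.isdigit()]
            if len(digits) != 13:
                return False
            total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
            return total % 10 == 0

        print(isbn13_checksum_ok("9780000000002"))  # True, yet no such book need exist
        ```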

        • @Buffalox
          12 months ago

          I'm sure you are right; an LLM isn't intelligent enough to distinguish between fact and fantasy on its own, which IMO is a bit disappointing considering the early reports about ChatGPT, which were overwhelmingly positive. The AI is way more artificial than intelligent. Or, as I saw earlier: the "i" in LLM stands for intelligence. 😋