LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

  • @A_A
    10 hours ago

    Suppose A and B are at war and, based on all the insults they throw at each other, you train an LLM to explain what is going on. Well, it will be quite bad. Maybe that is part of the explanation.

    • Flying Squid (OP)
      10 hours ago

      But that’s exactly the problem. Humans with degrees in history can figure out what is an insult and what is a statement of fact a hell of a lot better than an LLM.

      • @A_A
        10 hours ago

        It took nature maybe thousands or even millions of years to create animals that understand who they are and what is going on around them. Give these machines a few more years; they are not all LLMs, and they are advancing quite rapidly.
        Finally, I completely agree with you that, for the time being, they are very bad at playing historian.