LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.

“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”

Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.

  • @JeeBaiChow · 12 hours ago

    Really wondering about the point of those PhDs the LLMs claim to have ‘passed’.

  • @QuarterSwede · 49 points · 12 hours ago

    Ugh. No one in the mainstream understands WHAT LLMs are and do. They’re really just basic input/output mechanisms. They don’t understand anything. Garbage in, garbage out, as it were.

    • @UnderpantsWeevil · 18 points · 10 hours ago

      They’re really just basic input/output mechanisms.

      I mean, I’d argue they’re highly complex I/O mechanisms, which is how you get weird hallucinations that developers can’t easily explain.

      But expecting cognition out of a graph is like demanding novelty out of a Plinko machine. Not only do you get out what you put in, but you get a very statistically well-determined output. That’s the whole point. The LLM isn’t supposed to be doing high-level cognitive extrapolation. It’s supposed to be doing statistical aggregation over word associations using a natural-language schema.
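      A toy sketch of that point (my own illustration; the association scores below are made up, not taken from any real model): turn raw scores for candidate next words into probabilities, then either always take the top word or make a weighted draw from the distribution.

      ```python
      import math
      import random

      def softmax(scores):
          """Convert raw association scores into a probability distribution."""
          exps = [math.exp(s) for s in scores]
          total = sum(exps)
          return [e / total for e in exps]

      # Hypothetical scores for words that might follow "the cat sat on the".
      candidates = {"mat": 4.0, "sofa": 2.5, "roof": 1.0, "idea": -2.0}
      probs = softmax(list(candidates.values()))

      # Greedy decoding: always the single most probable word -- fully determined.
      greedy = max(zip(candidates, probs), key=lambda pair: pair[1])[0]

      # Sampling: a weighted draw from the same distribution -- the "Plinko" part.
      sampled = random.choices(list(candidates), weights=probs, k=1)[0]

      print(greedy, sampled)
      ```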

    • snooggums · 7 points · 11 hours ago

      That is accurate, but the people who design and distribute LLMs refer to the process as machine learning and use terms like “hallucinations”, which is the primary cause of the confusion.

      • @SinningStromgald · 4 points · 10 hours ago

        I think the problem is the use of the term AI. Regular Joe Schmo hears/sees AI and thinks Data from ST:TNG or Cylons from Battlestar Galactica, not glorified search-engine chatbots. But AI sounds cooler than LLM, so they use AI.

        • @Grimy · 2 points · 8 hours ago

          The term is fine. Your examples are very selective. I doubt Joe Schmo thought the aimbots in CoD were truly intelligent when he referred to them as AI.

  • IninewCrow · 22 points · 12 hours ago

    This isn’t new and noteworthy … because we humans don’t understand human history and fail miserably to understand or remember past failings in every generation.

  • aramis87 · 8 points · 10 hours ago

    I was trying to solve a Betweenle a couple of weeks ago, had it down to the words immediately before and after the word-for-the-day, and couldn’t think of it. I went to three different AI engines and asked them what word was between those two, alphabetically. All three engines repeatedly gave me “answers” that did not fall between the two words. Like, I’d ask “what 5-letter English words are alphabetically between amber and amble?” and they’d suggest aisle or armor. None of them understood ‘alphabetically’.
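    For what it’s worth, the check being asked for is a plain lexicographic comparison; a minimal sketch (the word list is a stand-in, a real solver would load a full dictionary file):

    ```python
    # Candidate words -- a stand-in list; a real solver would read a dictionary file.
    words = ["aisle", "amber", "ambit", "amble", "ample", "armor"]

    low, high = "amber", "amble"
    between = sorted(w for w in words if len(w) == 5 and low < w < high)

    print(between)  # ['ambit'] -- 'aisle' and 'armor' fall outside the range
    ```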

    • AnIndefiniteArticle · 8 points · 8 hours ago

      Try asking one to write a sentence that ends with the letter “r”, or a poem that rhymes.

      They know words as black boxen with weights attached for how likely they are to appear in certain contexts. Prediction happens by comparing the chain of these boxes leading up to the current cursor and using weights and statistics to fill in the next box.

      They don’t understand that those words are made of letters unless they have been programmed to break each word down into its component letters/syllables. None of them have been programmed to do this because that increases the already astronomical compute and training costs.

      About a decade ago I played with a language model whose Markov chain did predictions based on what letter came next instead of what word came next (a pretty easy modification of the base code). It was surprisingly good, comparably so, at putting sentences and grammar together when working at the letter scale. It was also far less efficient to train (which is saying something in comparison to word-level prediction models) because it needs to consider many more units (letters vs. words) leading up to the current one to maintain the same coherence. If the Markov chain is looking at the past 10 words, a word-level predictor has 10 boxes to factor into its calculations and training. If those words average 5 letters each, then a letter-level predictor needs to consider at least 50 boxes to maintain the same awareness of context within a sentence or paragraph. That is a five-fold increase in memory footprint, and an even greater increase in compute time (since most operations are at least linear in the context length, and sometimes worse).
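      A rough sketch of that kind of order-n chain (my own reconstruction, not the commenter’s original code): the only difference between the word-level and letter-level versions is what counts as a unit and how much context has to be tracked.

      ```python
      import random
      from collections import defaultdict, Counter

      def build_chain(units, order):
          """Count which unit follows each length-`order` context."""
          chain = defaultdict(Counter)
          for i in range(len(units) - order):
              context = tuple(units[i:i + order])
              chain[context][units[i + order]] += 1
          return chain

      def generate(chain, seed, order, steps):
          """Extend `seed` one unit at a time with a weighted draw over the counts."""
          out = list(seed)
          for _ in range(steps):
              counts = chain.get(tuple(out[-order:]))
              if not counts:
                  break
              units, weights = zip(*counts.items())
              out.append(random.choices(units, weights=weights, k=1)[0])
          return out

      text = "the quick brown fox jumps over the lazy dog " * 50

      # Word-level: 10 units of context cover 10 words.
      words = text.split()
      word_chain = build_chain(words, order=10)
      print(" ".join(generate(word_chain, words[:10], order=10, steps=15)))

      # Letter-level: covering the same ~10 words takes ~50 units of context,
      # so the chain has to track far more state for the same coherence.
      chars = list(text)
      char_chain = build_chain(chars, order=50)
      print("".join(generate(char_chain, chars[:50], order=50, steps=80)))
      ```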

      Taking that efficiency hit would allow LLMs to understand sub-word concepts like alphabetization, rhyming, and root words, but the expense and energy requirements aren’t worth this modest expansion of understanding.

      Adding a Generative Pre-trained Transformer (GPT) just adds some plasticity to those weights and statistics beyond the Markov chain example I use above.

  • @AbouBenAdhem · 12 points · 11 hours ago

    For over a decade, complexity scientist Peter Turchin and his collaborators have worked to compile an unparalleled database of human history – the Seshat Global History Databank. Recently, Turchin and computer scientist Maria del Rio-Chanona turned their attention to artificial intelligence (AI) chatbots, questioning whether these advanced models could aid historians and archaeologists in interpreting the past.

    Peter Turchin and his collaborators don’t have a great record of understanding human history themselves—their basic shtick has been to try to validate an Enlightenment-era, linear view of human history with statistics from their less-than-rigorous database, with less-than-impressive results. I wouldn’t necessarily expect an AI to outperform them, but I wouldn’t trust their evaluation of it, either.

  • @Grimy · 7 points · 11 hours ago

    LLMs demonstrated greater accuracy when addressing questions about ancient history, particularly between 8,000 BCE and 3,000 BCE, but struggled with more recent events, especially from 1,500 CE to the present.

    I’m not entirely surprised by this. LLMs are trained on the whole internet, not just the good parts. There are groups online that are very vocal about things like the Confederates being in the right, for example. It would make sense to assume this essentially poisons the datasets. Realistically, no one is contesting history before that time.

    Not that it isn’t a problem and doesn’t need fixing, just that it makes “sense”.

  • @A_A · 2 points · 10 hours ago

    Suppose A and B are at war and, based on all the insults they throw at each other, you train an LLM to explain what’s going on. Well, it will be quite bad. Maybe this is some part of the explanation.

    • Flying SquidOP · 4 points · 10 hours ago

      But that’s exactly the problem. Humans with degrees in history can figure out what is an insult and what is a statement of fact a hell of a lot better than an LLM.

      • @A_A · 0 points · 9 hours ago

        It took maybe thousands or even millions of years for nature to create animals that understand who they are and what’s going on around them. Give those machines a few more years; they are not all LLMs, and they are advancing quite rapidly.
        Finally, I completely agree with you that, for the time being, they are very bad at playing historian.