LLMs are acing the MCAT, the bar exam, the SAT, etc. like they’re nothing. At this point their performance is superhuman. However, they’ll often trip over super simple common-sense questions, and they struggle with creative thinking.

Is this proof that standardized tests are not a good measure of intelligence?

  • @kromem
    8 months ago

    This isn’t quite correct. There’s a real possibility of the training data biasing the results, but models also perform well on things they haven’t seen before.

    For example, this guy took an IQ test, rewrote the visual questions as natural language questions, and gave the test to various LLMs:

    https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq

    These are questions with specific wording that the models won’t have been trained on, since he wrote them out fresh. Older models post very poor IQ results, but the current SotA model scores 100.

    Engaging with the free version of ChatGPT and concluding “LLMs are dumb” is kind of like talking to a moron and concluding “humans are dumb.” Yes, the free version of ChatGPT scores around a 60 IQ on that test, but it doesn’t represent the cream of the crop.

    • @[email protected]
      8 months ago

      Maybe, but this is giving the AI a lot of help. No one rewrites the visual questions for humans who take IQ tests. That spatial reasoning is part of the test.

      In reality, no AI would pass any of these tests, because the first step is writing your name on the paper. Even that is beyond most AIs, because they never have to deal with the real world. They don’t actually understand anything.

      • @kromem
        8 months ago

        “They don’t actually understand anything.”

        This isn’t correct, and research over the past year has shown it to be incorrect again and again.

        The investigation reveals that Othello-GPT encapsulates a linear representation of opposing pieces, a factor that causally steers its decision-making process. This paper further elucidates the interplay between the linear world representation and causal decision-making, and their dependence on layer depth and model complexity.

        https://arxiv.org/abs/2310.07582
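
        The technique behind that result is linear probing: train a plain linear classifier to read the board state back out of a layer’s activations, then check whether intervening on that representation changes the model’s moves. A toy sketch of the probing step (synthetic stand-in data, not the paper’s actual Othello-GPT activations):

        ```python
        # Linear probe sketch: can a linear map recover a board square's state
        # (empty / mine / yours) from hidden activations? The labels here are
        # constructed to be linearly decodable, so the probe succeeds by
        # design; the point is the mechanics, not the result.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_games, d_model, n_squares = 2000, 512, 64

        acts = rng.normal(size=(n_games, d_model))          # stand-in activations
        true_w = rng.normal(size=(d_model, n_squares * 3))  # hidden linear structure
        labels = (acts @ true_w).reshape(n_games, n_squares, 3).argmax(-1)

        square = 27  # probe a single square for brevity
        probe = LogisticRegression(max_iter=1000).fit(acts, labels[:, square])
        print("linear probe accuracy:", probe.score(acts, labels[:, square]))
        # High accuracy from a purely *linear* probe is the evidence that the
        # state is represented linearly inside the model.
        ```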

        Sizeable differences exist among model capabilities that are not captured by their ranking on popular LLM leaderboards (“cramming for the leaderboard”). Furthermore, simple probability calculations indicate that GPT-4’s reasonable performance on k=5 is suggestive of going beyond “stochastic parrot” behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.

        https://arxiv.org/abs/2310.17567
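
        The “simple probability calculations” there are basically combinatorics: the number of ways to combine k skills out of even a modest inventory dwarfs what any training corpus could cover. A back-of-the-envelope version (the 100-skill inventory is my illustrative assumption, not the paper’s exact number):

        ```python
        # Back-of-the-envelope "beyond stochastic parrot" argument: the number
        # of ways to combine k skills out of N quickly exceeds anything a
        # training set could contain examples of.
        from math import comb

        N = 100  # assumed skill inventory size, purely illustrative
        for k in range(1, 6):
            print(f"k={k}: {comb(N, k):,} possible skill combinations")

        # k=5 alone gives 75,287,520 combinations (before multiplying by
        # topics), so consistently correct k=5 answers suggest composing
        # skills rather than recalling memorized text.
        ```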

        We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER substantially improves GPT-4 and PaLM 2’s performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT).

        https://arxiv.org/abs/2402.03620
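
        Mechanically, SELF-DISCOVER is just a staged prompting pipeline. A minimal sketch, assuming an `ask` function that sends a prompt to your LLM of choice and returns its reply (module names paraphrased, not the paper’s exact set):

        ```python
        from typing import Callable

        # Paraphrased examples; the paper uses a larger fixed set of modules.
        ATOMIC_MODULES = [
            "critical thinking",
            "step-by-step thinking",
            "break the problem into sub-problems",
            "reflect on possible mistakes",
        ]

        def self_discover(task: str, ask: Callable[[str], str]) -> str:
            # SELECT: pick the modules relevant to this task.
            selected = ask(f"Task: {task}\nWhich of these reasoning modules "
                           f"would help with this task? {ATOMIC_MODULES}")
            # ADAPT: rephrase the generic modules to be task-specific.
            adapted = ask(f"Task: {task}\nRephrase these modules so they are "
                          f"specific to this task:\n{selected}")
            # IMPLEMENT: compose them into an explicit reasoning structure.
            structure = ask(f"Task: {task}\nCompose the adapted modules into a "
                            f"step-by-step reasoning structure:\n{adapted}")
            # Finally, answer the task while following the discovered structure.
            return ask(f"Task: {task}\nFollow this reasoning structure to "
                       f"solve the task:\n{structure}")
        ```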

        Just a few of the relevant papers you might want to check out before stating things as facts.