• @Siegfried
    2 months ago

    No, AI is a very real thing… just not LLMs; those are pure marketing

    • @[email protected]
      2 months ago

      The latest LLMs get a perfect score on the South Korean SAT and can pass the bar. That's more than pure marketing if you ask me. That said, 90% of businesses that claim AI are still nothing more than marketing, or are pretty much just front ends for the GPT APIs. LLMs like Claude even check their work for hallucinations. Even if we limited all AI to LLMs, they would still be groundbreaking.

      • @clutchtwopointzero
        2 months ago

        The Korean SAT is highly standardized in multiple-choice form, and there is an immense library of past exams that both test takers and examiners use. I would be more impressed if the LLMs could also show a step-by-step problem work-through…

        • @[email protected]
          2 months ago

          Claude 3.5 and o1 might be able to do that; if not, they are close to it. Still better than 99.99% of humans on Earth

          • @[email protected]
            2 months ago

            You seem to be in the camp of believing the hype. See this write-up of an Apple paper detailing how adding simple statements that should not affect the answer to a question severely disrupts many of the top models' abilities.

            In Bloom’s taxonomy of the six stages of higher-level thinking, I would say they reach the second stage, ‘understanding’, only in a small number of contexts, but we give them so much credit because, as a society, our supposed intelligence tests for people have always been more like memory tests.

            • @clutchtwopointzero
              2 months ago

              Exactly… People are conflating the ability to parrot an answer based on machine-scale recall (which is frankly impressive) with the machine actually understanding something and being able to articulate how it arrived at a conclusion (which, in programming circles, would be similar to a form of “introspection”). LLMs are not there yet.
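
              For anyone unfamiliar with the term: “introspection” in programming means a program examining its own structure at runtime. A minimal Python sketch of the idea, using the standard `inspect` module (the `solve` function here is just a made-up example):

```python
import inspect

def solve(x):
    """Add one to x."""
    return x + 1

# A program can examine its own functions at runtime:
print(inspect.signature(solve))  # prints: (x)
print(inspect.getdoc(solve))     # prints: Add one to x.
```

              An LLM, by contrast, has no comparable built-in mechanism for reporting how it actually arrived at an answer.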