The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, graduate student. However, this was an improvement over previous models, whose capability was closer to an actually incompetent graduate student. It may only take one or two further iterations of improved capability (and integration with other tools, such as computer algebra packages and proof assistants) until the level of “competent graduate student” is reached, at which point I could see this tool being of significant use in research level tasks.

  • @NegentropicBoy
    234 days ago

    O1 is (apparently) different according to some videos I watched, as it pulls apart the question and does some reasoning steps.

    • @aodhsishaj
      154 days ago

      I’d love to see one of those videos

        • @aodhsishaj
          23 days ago

          @NegentropicBoy:

          O1 is (apparently) different according to some videos I watched, as it pulls apart the question …

          Yes

      • @tee9000
        12 days ago

        It’s what ChatGPT calls it.

      • @[email protected]OP
        3 days ago

        LLMs are basically just good pattern matchers. But just like how A* search can find a better path than a human can by breaking the problem down into simple steps, so too can an LLM make progress on an unsolved problem if it’s used properly and combined with a formal reasoning engine.
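        Since A* comes up as the analogy, here is a minimal, self-contained sketch of it on a toy grid (the grid, function names, and the Manhattan heuristic are illustrative choices, not from any particular library):

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2D grid of 0 (free) and 1 (wall).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible on a 4-connected grid,
        # so A* is guaranteed to return a shortest path.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Priority queue ordered by f = g + h; each entry carries its path.
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            r, c = nxt
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
path = astar(grid, (0, 0), (3, 3))
print(len(path))  # 7 cells, i.e. 6 moves
```

        The point of the analogy: each individual step (pop a cell, check its neighbours) is trivial, yet the composed procedure solves something a human would struggle to do by eye on a large grid.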

        I’m going to be real with you: almost every new mathematical idea builds on the math that came before. Nothing is as original as AI detractors seem to believe.

        By “does some reasoning steps,” OpenAI presumably just mean they invoke the LLM iteratively so it can review its own output before giving a final answer. It’s not a new idea.
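        That draft-critique-revise loop can be sketched in a few lines. The callables below are hypothetical stand-ins for model calls with different prompts per role; this is a sketch of the general pattern, not OpenAI’s actual o1 implementation, which has not been published:

```python
def iterate_with_review(ask, critique, revise, question, max_rounds=3):
    """Generate an answer, then let a critic loop over it.

    `ask`, `critique`, and `revise` stand in for model calls;
    `critique` returns None when it is satisfied.
    """
    answer = ask(question)
    for _ in range(max_rounds):
        feedback = critique(question, answer)
        if feedback is None:  # critic accepts the answer
            break
        answer = revise(question, answer, feedback)
    return answer

# Toy stubs so the loop runs deterministically (a real system
# would prompt the same LLM in each of the three roles).
def ask(q):
    return "5"  # deliberately wrong first draft

def critique(q, a):
    return None if a == "4" else "re-check the arithmetic"

def revise(q, a, feedback):
    return "4"

print(iterate_with_review(ask, critique, revise, "What is 2 + 2?"))  # 4
```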