For OpenAI, o1 represents a step toward its broader goal of human-like artificial intelligence. More practically, it does a better job at writing code and solving multistep problems than previous models. But it’s also more expensive and slower to use than GPT-4o. OpenAI is calling this release of o1 a “preview” to emphasize how nascent it is.

The training behind o1 is fundamentally different from its predecessors, OpenAI’s research lead, Jerry Tworek, tells me, though the company is being vague about the exact details. He says o1 “has been trained using a completely new optimization algorithm and a new training dataset specifically tailored for it.”

OpenAI taught previous GPT models to mimic patterns from its training data. With o1, it trained the model to solve problems on its own using a technique known as reinforcement learning, which teaches the system through rewards and penalties. It then uses a “chain of thought” to process queries, similarly to how humans process problems by going through them step-by-step.
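
To make the reinforcement-learning idea concrete, here is a minimal toy sketch in Python: an agent learns, purely from rewards and penalties, which problem-solving strategy pays off. Everything here (the strategy names, the success rates, the epsilon-greedy update) is an illustrative assumption; OpenAI has not published o1's actual algorithm.

```python
import random

# Toy reinforcement learning: an agent discovers which "strategy" earns the
# most reward. This is a generic epsilon-greedy bandit, NOT OpenAI's setup.
STRATEGIES = ["guess", "step_by_step", "work_backwards"]

# Hypothetical success rates: careful step-by-step reasoning wins most often.
TRUE_SUCCESS_RATE = {"guess": 0.2, "step_by_step": 0.8, "work_backwards": 0.5}

values = {s: 0.0 for s in STRATEGIES}  # estimated reward per strategy
counts = {s: 0 for s in STRATEGIES}

for _ in range(10_000):
    # Explore a random strategy 10% of the time; otherwise exploit the best.
    if random.random() < 0.1:
        choice = random.choice(STRATEGIES)
    else:
        choice = max(values, key=values.get)

    # Reward of 1 for "solving" the problem, 0 (a penalty) otherwise.
    reward = 1.0 if random.random() < TRUE_SUCCESS_RATE[choice] else 0.0

    # Update the running average reward for the chosen strategy.
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]

print(max(values, key=values.get))  # converges on "step_by_step"
```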

At the same time, o1 is not as capable as GPT-4o in a lot of areas. It doesn’t do as well on factual knowledge about the world. It also doesn’t have the ability to browse the web or process files and images. Still, the company believes it represents a brand-new class of capabilities. It was named o1 to indicate “resetting the counter back to 1.”

I think this is the most important part (emphasis mine):

As a result of this new training methodology, OpenAI says the model should be more accurate. “We have noticed that this model hallucinates less,” Tworek says. But the problem still persists. *“We can’t say we solved hallucinations.”*

  • @Defaced · 23 points · 3 months ago

    OpenAI doesn’t want you to know that, though; they want their work to show progress so they get more investor money. It’s pretty fucking disgusting and dangerous to call this tech any form of artificial intelligence. The naming conventions meant to make this tech sound human are also dangerous and irresponsible.

    • @[email protected] · 12 points · 3 months ago

      It is literally artificial intelligence, though. Just because ChatGPT doesn’t perform the way a layperson imagined it would doesn’t mean it’s not AI. They just have an unrealistic expectation of what counts as AI, along with the common misconception that AI and AGI are the same thing.

      A chess-playing robot uses artificial intelligence as well. It’s a narrow AI, meaning it can do one thing really well, but that ability doesn’t translate to other things. AGI, on the other hand, stands for Artificial General Intelligence. Humans are an example of general intelligence, meaning we have the cognitive ability to perform well on several unrelated tasks.
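
      For a concrete sense of what “narrow AI” means here, below is a minimal minimax sketch, the kind of search a chess engine is built on. The tiny hand-written game tree is purely hypothetical; real engines add board evaluation, pruning, and much more.

      ```python
      # Narrow AI in miniature: minimax picks the best move in one game and
      # transfers to nothing else. The toy game tree below is illustrative only.
      def minimax(node, maximizing):
          # Leaves are scores from the maximizing player's point of view.
          if isinstance(node, (int, float)):
              return node
          scores = [minimax(child, not maximizing) for child in node]
          return max(scores) if maximizing else min(scores)

      # A hypothetical two-ply game: each inner list is the opponent's choice point.
      game_tree = [[3, 5], [2, 9], [0, 7]]
      print(minimax(game_tree, maximizing=True))  # -> 3, the best guaranteed outcome
      ```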

        • @[email protected] · 5 points · 3 months ago

          I don’t see the need to be such a dick about it. The term AGI was coined in the ’90s.

          • @Cryophilia · -5 points · 3 months ago

            It offends me when hype chasers do this to try to legitimize their snake oil. I don’t care what like 5 random researchers mentioned one time in the ’90s; it does not justify calling a language-prediction model “AI”. That’s not what the term has ever meant.

            • @[email protected] · 5 points · 3 months ago

              That’s a bit like taking issue with the terms jig, spinner, spoon, and fly, and saying you don’t care what some random fishermen call them; to the rest of us, they’re just lures.

              AGI is a subcategory of AI. We’ve had AGI systems in science fiction for decades, but we’ve just been calling them AI. That isn’t wrong, but it is an unspecific term: AI is broad and encompasses everything from predictive text input to AGI and beyond. Every AGI system is also AI, but not everything that is AI is generally intelligent. ASI (Artificial Super Intelligence) would be an even more specific term, referring to something that is not only artificial and generally intelligent but exceeds human intelligence.

              Artificial intelligence: the ability of a computer or other machine to perform those activities that are normally thought to require intelligence.

    • @tee9000 · 4 points · 3 months ago

      Their work is making progress. What is irresponsible or dangerous? I’m not understanding what you mean.

      • @Defaced · 5 points · 3 months ago

        It’s irresponsible because making it sound like true AI when it’s not will make it difficult to pull the plug when things go wrong, and you’ll end up debating whether it’s sentient and whether it’s humane to kill it like a pet or a criminal. It’s more akin to using rainbow tables to help crack passwords and then claiming your software is super powerful, when in reality it’s nothing without the tables (a very rudimentary analogy, not meant to be taken at face value; see the sketch at the end of this comment).

        It’s dangerous because talking about AI as if it were a reasoning, thinking thing is just not true, and we’re already seeing the big AI overlords try to justify how they created it with copyrighted material. The arguments over copyrighted material are already being made, and we’ll soon see those companies claim it’s no different from a child looking something up on Google. It’s irresponsible because it screws over creative people and copyright holders who genuinely made a product, a piece of art, or a book in their own free time, only to have it ripped away and used to create something that will eventually push those same copyright holders out.

        The AI market is moving faster than the world can keep up with, and that is a dangerous precedent to set for the future of this market. And for the record, I don’t think we’re dealing with early generations of Skynet or anything like that; we’re dealing with tools that could cause economic collapse on a scale we’ve never seen, and if we don’t lay the ground rules now, we will be in trouble.

        Edit: A great example of this is https://v0.dev/chat. It has the potential to put front-end developers out of work. It’s simple now, but give it time and it could produce frontends that rival the best UX designs, given the right prompt.
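
        To make the rainbow-table analogy above concrete, here is a simplified sketch: all of the apparent “cracking power” lives in a precomputed table. (Real rainbow tables use hash chains to save space, and the password list here is made up, but the principle is the same.)

        ```python
        import hashlib

        # All the "power" is precomputed: hash -> plaintext for common passwords.
        # A real rainbow table compresses this with hash chains; same idea.
        COMMON_PASSWORDS = ["123456", "password", "letmein", "hunter2"]
        TABLE = {hashlib.sha256(p.encode()).hexdigest(): p for p in COMMON_PASSWORDS}

        def crack(password_hash):
            # No cleverness at lookup time: just a dictionary access.
            return TABLE.get(password_hash)

        stolen = hashlib.sha256(b"hunter2").hexdigest()
        print(crack(stolen))  # -> "hunter2"
        ```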

        • @tee9000 · 3 points · 3 months ago

          I appreciate the effortful response, but I don’t think regulators would get caught up on colloquial names when weighing benefit versus harm and deciding to do something like ban a model.

          We just aren’t close enough to the same perspective to discuss it further. Thanks again for the good-faith clarification.

          • @jimmy90 · 1 point · 3 months ago

            I think over-selling this “AI” with “reasoning/thinking” language verges on fraud and encourages inappropriate, even dangerous, applications.

            • @tee9000 · 2 points · 3 months ago

              Why does AI that has a “reasoning” step become dangerous?

              • @[email protected] · 1 point · 3 months ago

                It will be used to take control over people’s lives.

                However simple the application may be (denying a job, insurance, care, etc.), it will be hailed as using “reason” while it just repeats patterns from its training sets.

                It does not “reason”, because it can’t. Selling it as such is very dangerous, as it will be used against people, and it’s dishonest to investors as well: they will jump on it even though the claim isn’t true and never will be for this kind of model.

                https://softwarecrisis.dev/letters/llmentalist/

                • @tee9000 · 1 point · 3 months ago

                  I agree that it’s a bit of an ethical minefield to employ it to make decisions that affect people’s livelihoods. But my point is: if a company uses it to decide whether an insurance claim should be paid out, the model’s ability to make those decisions isn’t changed by what we call the steps it takes to reach a decision.

                  If an insurance company can dissect any particular claim decision and agree with each step the model took, is it really different from having a person do it? Might it even be better in some ways? A real concern is that AI isn’t perfect and its mistakes are pretty hard to accept… that seems pretty dystopian, I get that. But if fewer mistakes are made and you can still appeal decisions, then maybe it’s overblown?

                  • @[email protected] · 1 point · 3 months ago

                    LLMs just repeat their training sets, so every mistake is repeated forever.

                    Every bias is locked in and can’t be fixed.

                    So you just deny people and expect them to appeal everything… sounds like you are offloading the costs onto the victims.

                    Shit like that is what makes it demonic.

              • @jimmy90 · 1 point · 3 months ago

                Assuming that “AI” has “reasoning”, and using it in applications that require reasoning, is what’s dangerous.

                • @tee9000 · 3 points · 3 months ago

                  Sorry, but that’s not an explanation of your position; that’s just restating what you said.