• @jacksilver

    My understanding is it’s just an LLM (not multimodal), and the training time/cost looks about the same for most of these.

    I feel like the world’s gone crazy, but OpenAI (and others) are pursuing more complex model designs with multimodality. Those are going to be more expensive due to image/video/audio processing. Unless I’m missing something, that would probably account for the cost difference between current and previous iterations.

    • @[email protected]

      The thing is that R1 is being compared to gpt4, or in some cases gpt4o. That model cost OpenAI something like $80M to train, so roughly equivalent performance for an order of magnitude less cost is not for nothing. DeepSeek also says the model is much cheaper to run for inference as well, though I can’t find any figures on that.

      • @jacksilver

        My main point is that gpt4o and the other models it’s being compared to are multimodal, while R1 is only an LLM from what I can find.

        Something trained on audio/pictures/videos/text is probably going to cost more than just text.

        But maybe I’m missing something.

        • @[email protected]

          The original gpt4 is just an LLM though, not multimodal, and the training cost for that is still estimated to be over 10x R1’s if you believe the numbers. I think where R1 is compared to 4o is in so-called reasoning, where you can see the chain of thought or internal prompt paths that the model uses to (expensively) produce an output.

          • @jacksilver

            I’m not sure how good a source it is, but Wikipedia says it was multimodal and came out about two years ago - https://en.m.wikipedia.org/wiki/GPT-4.

            That being said, the comparisons are against gpt4o on LLM benchmarks, so that may be a valid argument for the LLM capabilities.

            However, I think a lot of the more recent models are pursuing architectures with the ability to act on their own, like Claude’s computer use - https://docs.anthropic.com/en/docs/build-with-claude/computer-use, which DeepSeek R1 is not attempting.

            Edit: and I think the real money will be in the more complex models focused on workflow automation.

          • veroxii

            Holy smoke balls. I wonder what else they have ready to release over the next few weeks. They might have a whole suite of things just waiting to be strategically deployed.