• @[email protected]
    link
    fedilink
    124
    edit-2
    10 days ago

    One of those rare lucid moments by the stock market? Is this the market correction everyone knew was coming, or will some famous techbro technobabble some more about AI overlords until the stocks return to their fantasy valuations?

    • @[email protected]
      link
      fedilink
      100
      10 days ago

      It’s quite lucid. The new thing uses a fraction of the compute of the old thing for the same results, so Nvidia cards, for example, are going to be in way less demand. That said, Nvidia stock was way too high, surfing the AI hype for the last ~2 years, and despite the plunge it’s not even back to normal.

      • @jacksilver
        link
        28
        10 days ago

        My understanding is it’s just an LLM (not multimodal), and the training time/cost looks about the same for most of these.

        I feel like the world’s gone crazy, but OpenAI (and others) are pursuing more complex multimodal model designs. Those are going to be more expensive due to image/video/audio processing. Unless I’m missing something, that would probably account for the cost difference between current and previous iterations.

        • @[email protected]
          link
          fedilink
          English
          38
          10 days ago

          The thing is that R1 is being compared to gpt4 or in some cases gpt4o. That model cost OpenAI something like $80M to train, so saying it has roughly equivalent performance for an order of magnitude less cost is not for nothing. DeepSeek also says the model is much cheaper to run for inferencing as well, though I can’t find any figures on that.
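
          A quick back-of-envelope check of that “order of magnitude” claim. The GPT-4 figure is the ~$80M estimate cited above; the R1 figure is a placeholder assumed purely for illustration, since no verified number is given here:

          ```python
          # Rough training-cost ratio between the two models.
          gpt4_cost_usd = 80_000_000  # estimate cited in this thread
          r1_cost_usd = 6_000_000     # hypothetical; not a verified figure

          ratio = gpt4_cost_usd / r1_cost_usd
          print(f"GPT-4 / R1 training-cost ratio: ~{ratio:.1f}x")
          ```

          Anything in that ballpark for R1 puts the ratio above 10x, i.e. roughly an order of magnitude.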

          • @jacksilver
            link
            2
            10 days ago

            My main point is that gpt4o and the other models it’s being compared to are multimodal, while R1 is only an LLM from what I can find.

            Something trained on audio/pictures/videos/text is probably going to cost more than just text.

            But maybe I’m missing something.

            • @[email protected]
              link
              fedilink
              English
              22
              10 days ago

              The original gpt4 is just an LLM though, not multimodal, and its training cost is still estimated at over 10x R1’s if you believe the numbers. I think where R1 is compared to 4o is in so-called reasoning, where you can see the chain of thought or internal prompt paths the model uses to (expensively) produce an output.

              • @jacksilver
                link
                3
                edit-2
                10 days ago

                I’m not sure how good a source it is, but Wikipedia says it was multimodal and came out about two years ago - https://en.m.wikipedia.org/wiki/GPT-4

                That being said, the comparisons benchmark the LLM side against gpt4o, so maybe that’s a valid argument for the LLM capabilities.

                However, I think a lot of the more recent models are pursuing architectures with the ability to act on their own, like Claude’s computer use - https://docs.anthropic.com/en/docs/build-with-claude/computer-use - which DeepSeek R1 is not attempting.

                Edit: and I think the real money will be in the more complex models focused on workflow automation.

              • veroxii
                link
                fedilink
                4
                10 days ago

                Holy smoke balls. I wonder what else they have ready to release over the next few weeks. They might have a whole suite of things just waiting to strategically deploy.

      • @[email protected]
        link
        fedilink
        10
        10 days ago

        How is the “fraction of compute” being verified? Is the model available for independent analysis?

        • @[email protected]
          link
          fedilink
          28
          10 days ago

          It’s freely available with a permissive license, but I don’t think that claim has been verified yet.

          • @[email protected]
            link
            fedilink
            English
            8
            10 days ago

            And the data is not available. Knowing the weights of a model doesn’t really tell us much about its training costs.

      • davel [he/him]
        link
        fedilink
        English
        4
        10 days ago

        If AI is cheaper, then we may use even more of it, and that would soak up at least some of the slack, though I have no idea how much.

    • scratsearcher 🔍🔮📊🎲
      link
      fedilink
      English
      2
      9 days ago

      Most rational market: Sell off NVIDIA stock after Chinese company trains a model on NVIDIA cards.

      Anyways NVIDIA still up 1900% since 2020 …

      how fragile is this tower?