US chip-maker Nvidia led a rout in tech stocks Monday after the emergence of a low-cost Chinese generative AI model that could threaten US dominance in the fast-growing industry.

The chatbot developed by DeepSeek, a startup based in the eastern Chinese city of Hangzhou, has apparently matched the capabilities of US AI pace-setters for a fraction of the investment made by American companies.

Shares in Nvidia, whose semiconductors power the AI industry, fell more than 15 percent in midday deals on Wall Street, erasing more than $500 billion of its market value.

The tech-rich Nasdaq index fell more than three percent.

AI players Microsoft and Google parent Alphabet were firmly in the red while Meta bucked the trend to trade in the green.

DeepSeek, whose chatbot became the top-rated free application on Apple’s US App Store, said it spent only $5.6 million developing its model – peanuts when compared with the billions US tech giants have poured into AI.

US “tech dominance is being challenged by China,” said Kathleen Brooks, research director at trading platform XTB.

“The focus is now on whether China can do it better, quicker and more cost effectively than the US, and if they could win the AI race,” she said.

US venture capitalist Marc Andreessen has described DeepSeek’s emergence as a “Sputnik moment” – a reference to the Soviet Union shocking Washington with its 1957 launch of the first satellite into orbit.

As DeepSeek rattled markets, the startup on Monday said it was limiting the registration of new users due to “large-scale malicious attacks” on its services.

Meta and Microsoft are among the tech giants scheduled to report earnings later this week, offering an opportunity to comment on the emergence of the Chinese company.

Shares in another US chip-maker, Broadcom, fell 16 percent while Dutch firm ASML, which makes the machines used to build semiconductors, saw its stock tumble 6.7 percent.

“Investors have been forced to reconsider the outlook for capital expenditure and valuations given the threat of discount Chinese AI models,” said David Morrison, senior market analyst at Trade Nation.

“These appear to be as good, if not better, than US versions.”

Wall Street’s broad-based S&P 500 index shed 1.7 percent while the Dow was flat at midday.

In Europe, the Frankfurt and Paris stock exchanges closed in the red while London finished flat.

Asian stock markets mostly slid.

Just last week, following his inauguration, Trump announced a $500 billion venture, led by Japanese giant SoftBank and ChatGPT-maker OpenAI, to build AI infrastructure in the United States.

SoftBank tumbled more than eight percent in Tokyo on Monday while Japanese semiconductor firm Advantest was also down more than eight percent and Tokyo Electron off almost five percent.

  • @Womble

    ??? You don’t use training data when running models; that’s what is used in training them.

    • @UnderpantsWeevil

      DeepSeek open-sourced their model. Go ahead and train it on different data and try again.

      • @Womble

        Wow, ok, you really don’t know what you’re talking about, huh?

        No, I don’t have thousands of almost top-of-the-line graphics cards to retrain an LLM from scratch, nor the millions of dollars to pay for electricity.

        I’m sure someone will, and I’m glad this has been open-sourced; it’s a great boon. But that’s still no excuse to sweep under the rug blatant censorship of topics the CCP doesn’t want to be talked about.

        • @UnderpantsWeevil

          No, I don’t have thousands of almost top-of-the-line graphics cards to retrain an LLM from scratch

          Fortunately, you don’t need thousands of top-of-the-line cards to train the DeepSeek model. That’s the innovation people are excited about. The model improves on the original LLM design to reduce the time to train and the time to retrieve information.

          Contrary to common belief, an LLM isn’t just a fancy Wikipedia. It’s a schema for building out a graph of individual pieces of data, attached to a translation tool that turns human-language inputs into graph-search parameters. If you put facts about Tiananmen Square in 1989 into the model, you’ll get them back as results through the front-end.

          You don’t need to be scared of technology just because the team that introduced the original training data didn’t configure this piece of open-source software the way you like it.

          that’s still no excuse to sweep under the rug blatant censorship of topics the CCP doesn’t want to be talked about.

          Wow, ok, you really don’t know what you’re talking about, huh?

            • @UnderpantsWeevil

              The innovation is it’s comparatively cheap to train, compared to the billions

              Smaller builds with less comprehensive datasets take less time and money. Again, this doesn’t have to be encyclopedic. You can train your model entirely on a small sample of material detailing historical events in and around Beijing in 1989 if you are exclusively fixated on getting results back about Tiananmen Square.
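
              As a rough sketch of what that looks like in practice (everything here is hypothetical: the base model, the corpus file, the settings, and the use of the Hugging Face transformers and datasets libraries are all my own assumptions, not anything specified in this thread):

              import datasets
              from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                        DataCollatorForLanguageModeling,
                                        Trainer, TrainingArguments)

              # Hypothetical stand-ins: a small base model and a tiny curated corpus.
              base_model = "gpt2"
              corpus_file = "beijing_1989_notes.txt"

              tokenizer = AutoTokenizer.from_pretrained(base_model)
              tokenizer.pad_token = tokenizer.eos_token
              model = AutoModelForCausalLM.from_pretrained(base_model)

              # Load the small text corpus and tokenize it for causal-LM training.
              raw = datasets.load_dataset("text", data_files={"train": corpus_file})
              tokenized = raw["train"].map(
                  lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                  batched=True,
                  remove_columns=["text"],
              )

              trainer = Trainer(
                  model=model,
                  args=TrainingArguments(output_dir="tiny-run", num_train_epochs=1,
                                         per_device_train_batch_size=2),
                  train_dataset=tokenized,
                  # mlm=False gives next-token (causal) labels instead of masked-token ones.
                  data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
              )
              trainer.train()

              A toy run like this over a few megabytes of text finishes on a single GPU, which is the sense in which smaller builds with smaller datasets are cheap.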

              • @Womble

                Oh, by the way, as to your theory of “maybe it just doesn’t know about Tiananmen, it’s not an encyclopedia”…

                • Dhs92

                  I don’t think I’ve seen that internal dialog before with LLMs. Do you get that with most models when running them with ollama?

                  • @Womble

                    No, it’s not a feature of ollama; that’s the innovation of “chain of thought” models like OpenAI’s o1 and now this DeepSeek model: they narrate an internal dialogue first in order to try to produce more consistent answers. It isn’t perfect, but it helps with things like logical reasoning, at the cost of taking a lot longer to get to the answer.
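
                    As a rough illustration (the ollama Python client and the “deepseek-r1” model tag below are my own assumptions, not anything confirmed in this thread), the narrated block typically arrives wrapped in <think> tags and can be split away from the final answer:

                    import re

                    import ollama  # assumes `pip install ollama` and a running local ollama server

                    # Hypothetical model tag; any locally pulled chain-of-thought model would do.
                    response = ollama.chat(
                        model="deepseek-r1",
                        messages=[{"role": "user", "content": "Is 17077 a prime number?"}],
                    )
                    text = response["message"]["content"]

                    # The internal dialogue comes first, wrapped in <think>...</think>; the answer follows it.
                    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
                    thinking = match.group(1).strip() if match else ""
                    answer = re.sub(r"<think>.*?</think>", "", text, count=1, flags=re.DOTALL).strip()

                    print("--- internal dialogue ---")
                    print(thinking)
                    print("--- final answer ---")
                    print(answer)

                    That is all the “internal dialog” is: extra generated text in front of the answer that your front-end chooses to show.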

              • @Womble

                Ok sure, as I said before, I am grateful that they have done this and open-sourced it. But it is still deliberately politically censored, and no, “just train your own, bro” is not a reasonable reply to that.

                • @[email protected]

                  They know less than I do about LLMs if that’s something they think you can just DO… and that’s saying a lot.

            • @[email protected]

              Because the parent comment by Womble is about using the Chinese hosted DeepSeek app, not hosting the model themselves. The user above who responded either didn’t read the original comment carefully enough, or provided a very snarky response. Neither is particularly endearing.

              • @[email protected]

                But yeah. Anyone who thinks the app / stock model isn’t going to be heavily censored…

                Why else would it be free? It’s absolutely state-sponsored media. Or it’s a singularity and they’re just trying to get people to run it from within their networks, the former being far more plausible.

                And I know, I know, the latter isn’t how LLMs work. But would we really know when it isn’t?

              • @Womble

                No, that was me running the model on my own machine, not using DeepSeek’s hosted one. What they were doing was justifying blatant political censorship by saying anyone could spend millions of dollars themselves to follow their method and make their own model.

                You’ll notice how they stopped replying to my posts and started replying to others once it became untenable to pretend it wasn’t censorship.

              • @[email protected]

                Lol well. When I saw this I knew the model would be censored to hell, and then the CCP abliteration training-data repo made a lot more sense. That being said, the open-source effort to reproduce it is far more appealing.