I get the stupid basic excuse about big tech purchasing loads of silicon, but that reasoning seems deeply flawed to me. If DeepSeek R1 democratizes training by doing more with less, then customers with mid to small size data centers, like universities, now have access to train and research models in this space. Nvidia does not have a real competitor, so they still get the sale. Their potential customer base just grew enormously, right? OpenAI should be devalued massively by this change, but I don’t see why any of it impacts Nvidia negatively. Am I missing something, or is the market really this level of stupid? (I have no skin in this game.)

  • @typicalproducts
    link
    2
    edit-2
    3 hours ago

    I disagree with other commenters that the market is irrational. It appears irrational in relation to the news because, contrary to what we are told, news doesn’t move markets; money moves markets (though, yes, at some salient moments news is what causes the money to move). Often big news does nothing, and equally often major movements occur with no news to speak of.

    The cause of the big money movements into Nvidia seems pretty clear: over this past year Nvidia has become well known as a foundational business for the fastest-growing new sector, artificial intelligence. As such, like Amazon before it, it has become big enough to swallow its competitors, which makes it a good long position for investment. So, with good foundations and the lion’s share of the AI market, when the big investments drive up the price it is easy to get the financial news to drum up retail interest and explain why it is a good place for the average person to put their money, all of which drives the price up further.

    It seems to me more likely that this was a semi-coordinated (maybe only algorithmically coordinated) dump: it does not make a lot of sense to me that DeepSeek’s model would cause doubt about Nvidia’s valuation, since efficient models that briefly beat the leaders seem to pop up from time to time without the same armageddon, and retail investors are generally not keeping up with the bleeding edge of AI development. Indeed, before the massive sell-off, AI news YouTube videos on DeepSeek were not even particularly well watched.

    We are taught that the market reacts to news but, especially with dumps, it seems to me that more often news stories are cooked up as cover for the market movement (see, for instance, the way companies will send news briefs to CNBC to have them help pump and dump stocks).

    Maybe it was DeepSeek, or maybe it was Trump planning to tariff TSMC in Taiwan, or simply the algorithms deciding it was time for a little rug pull on retail.

    Whether the whales planned to sell together or the algorithms did is, I guess, not so important, though it is interesting that CNBC’s reporting mentioned that only a week ago big tech investors were discussing DeepSeek at Davos.

    Market movements are only out of the ordinary if the big investors lose out. Nvidia, I think (and hope, because I’m invested), will remain a good long monopolistic stock like Amazon, which will continue its cycle of good-news pumps and bad-news dumps as the big players prey on FOMO retail traders.

    • @j4k3OP
      link
      English
      11 hours ago

      I expect Nvidia to boom for at least another five years. There will eventually be an answer to the L2-to-L1 cache bottleneck in present CPU architecture, and I fully expect that answer will come in RISC-V. Real hardware takes ten years from napkin to first consumer products in hand. The key catalyst came shortly after the Llama model weights leaked. Anyone who knows fundamental computing history, like the historical relevance of Bell Labs and the fact that Meta AI is headed by Yann LeCun, should see the relevance of that “leak” in shaping the future. It will eventually be obvious that this event shaped the world. This is Apache versus Sun Microsystems, and OpenAI is Sun. Within a couple of months of the leak the inadequacy of present CPU architecture should have been obvious: in the past, every CPU-plus-coprocessor topology scheme has lost to a competing integrated architecture. That pattern is well established and should have immediately sparked new designs to fix the issue. That is when the real ten-year clock started before the real solution comes to market.
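One hedged way to see the bottleneck being described here (my own framing, not the commenter's): single-stream LLM decoding tends to be limited by how fast the weights can be streamed through the memory hierarchy, not by raw compute. The bandwidth figures and model size below are illustrative assumptions, not measurements.

```python
# Back-of-envelope sketch: if every decoded token requires reading the full
# set of weights from memory, then tokens/sec is roughly capped by
# memory bandwidth divided by model size. Numbers below are rough,
# assumed figures for illustration only.

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Crude ceiling on decode speed for a memory-bandwidth-bound model."""
    return bandwidth_gb_s / model_size_gb

# A ~7 GB model (e.g. 7B parameters at 8-bit) on typical dual-channel
# DDR5 (~80 GB/s, assumed) vs. a GPU's GDDR6X (~1000 GB/s, assumed):
print(round(max_tokens_per_sec(80, 7.0), 1))    # ~11.4 tokens/sec ceiling
print(round(max_tokens_per_sec(1000, 7.0), 1))  # ~142.9 tokens/sec ceiling
```

The order-of-magnitude gap comes almost entirely from memory bandwidth, which is why cache hierarchy and data movement, rather than core count or clock speed, are the pressure points the comment is pointing at.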

      The real solution will likely be considerably slower in raw speed and include many more logical cores, with a new architecture that can stream tensors through efficiently. Anything sold in the interim as “AI hardware” is marketing wank. I doubt Intel, AMD, or Nvidia will have the real solution. It will likely arrive on a RISC-V ISA and from someone new, much as traditional automakers are unlikely to succeed with EVs: the change is too radical a rethink, requires throwing out too much of the supply chain, and demands massive investment in areas like software-first design from companies that are incompetent with even the most basic software on present and late-model ICE vehicles. Intel and AMD have failed to keep parity with Nvidia in GPUs, while Nvidia has no experience with traditional CPU architectures. I think all of them will be made irrelevant by a future monolithic replacement, but that has quite a ways to go before it happens.

      I also find it funny that the whole e-core/Zen 5 energy-optimization push is cutting out the very instruction set required for the most efficient tensor math x86 can support. Because the OS scheduler can move a thread onto any core, every core must expose the same instruction set; since the e-cores lack AVX-512, even the P-cores that physically have the underlying tensor math instructions cannot use them. There is even speculative, circumstantial evidence that Intel’s response to people discovering that AVX-512 was disabled on the P-cores only in microcode, and re-enabling it with server CPU microcode, is part of what caused the massive problems with 13th/14th-gen processors. This is all happening when, had they been prepared and optimized for LLM inference, consumer P-core processors with the most advanced AVX instructions, able to load 512-bit-wide words in one cycle, could have made a massive difference. There are even ridiculously useless Intel processors with all-e-core designs.

      Both Intel and AMD showed they were not ready for this shift at all, so to me that means they are not likely to have a real monolithic replacement for x86 in the works either. It would not surprise me if China realized the political power of open source as a socialist rallying point, pushed back against the West, and rolled out a RISC-V solution. I kind of hope they do. Restricting their access to Nvidia is pushing them in this direction.
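A small, hedged illustration of the AVX-512 point (Linux-only sketch; the flag names are those the kernel reports in `/proc/cpuinfo`): software has to branch on what the CPU actually advertises, and on hybrid parts where the e-cores lack AVX-512 the `avx512f` flag simply is not reported, even though the P-core silicon physically has it.

```python
# Sketch: inspect the CPU feature flags the Linux kernel exposes and pick
# the widest usable vector path. On a hybrid Intel part with AVX-512
# fused off, this falls through to the AVX2 branch despite the P-cores
# having the hardware. Returns an empty set on non-Linux systems.
import pathlib

def cpu_flags() -> set[str]:
    """Return the set of CPU feature flags from /proc/cpuinfo, if present."""
    info = pathlib.Path("/proc/cpuinfo")
    if not info.exists():
        return set()
    for line in info.read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
if "avx512f" in flags:
    print("AVX-512F exposed: 16 fp32 lanes per instruction")
elif "avx2" in flags:
    print("AVX2 only: 8 fp32 lanes per instruction")
else:
    print("No AVX2 reported (or not Linux)")
```

This is the practical cost the comment is describing: inference libraries that dispatch on these flags never take the 512-bit path on such chips, regardless of what the P-cores could do.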